[jira] [Commented] (YARN-6922) Findbugs warning in YARN NodeManager

2017-08-01 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110354#comment-16110354
 ] 

Naganarasimha G R commented on YARN-6922:
-

I think this is the same issue as YARN-6515. Sorry for the delay from my side on 
the Docker-related configuration issue; we will discuss and resolve it ASAP.

> Findbugs warning in YARN NodeManager
> 
>
> Key: YARN-6922
> URL: https://issues.apache.org/jira/browse/YARN-6922
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Xuan Gong
>Assignee: Weiwei Yang
>
> Several findbugs warnings in the YARN NodeManager package.
> {code}
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
>  is a mutable collection which should be package protected
> Bug type MS_MUTABLE_COLLECTION_PKGPROTECT (click for details) 
> In class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics
> Field 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
> At ContainerMetrics.java:[line 134]
> {code}
> {code}
>   
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
>  makes inefficient use of keySet iterator instead of entrySet iterator
> Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
> In class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer
> In method 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
> Field 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.pendingResources
> At ContainerLocalizer.java:[line 357]
> {code}
> {code}
>   
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
>  makes inefficient use of keySet iterator instead of entrySet iterator
> Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
> In class org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl
> In method 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
> Field 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.recentlyStoppedContainers
> At NodeStatusUpdaterImpl.java:[line 719]
> {code}
> {code}
> Hard coded reference to an absolute pathname in 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
> Bug type DMI_HARDCODED_ABSOLUTE_FILENAME (click for details) 
> In class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime
> In method 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
> File name /sys/fs/cgroup
> At DockerLinuxContainerRuntime.java:[line 490]
> {code}
> {code}
>   Useless object stored in variable removedNullContainers of method 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
> Bug type UC_USELESS_OBJECT (click for details) 
> In class org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl
> In method 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
> Value removedNullContainers
> Type java.util.HashSet
> At NodeStatusUpdaterImpl.java:[line 642]
> {code}
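
For reference, the two WMI_WRONG_MAP_ITERATOR warnings above are typically resolved by iterating over entrySet() instead of keySet(), which avoids a second hash lookup per key. A minimal, self-contained sketch of the pattern (hypothetical map and values, not the actual NodeManager code):

{code}
import java.util.HashMap;
import java.util.Map;

public class EntrySetExample {
  public static void main(String[] args) {
    Map<String, Long> pending = new HashMap<>();
    pending.put("resource-a", 1L);
    pending.put("resource-b", 2L);

    // Flagged pattern: get(key) performs an extra lookup on every iteration.
    for (String key : pending.keySet()) {
      System.out.println(key + "=" + pending.get(key));
    }

    // Preferred pattern: entrySet() yields key and value together.
    for (Map.Entry<String, Long> e : pending.entrySet()) {
      System.out.println(e.getKey() + "=" + e.getValue());
    }
  }
}
{code}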



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6916) Moving logging APIs over to slf4j in hadoop-yarn-server-common

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110353#comment-16110353
 ] 

Hadoop QA commented on YARN-6916:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: 
The patch generated 0 new + 89 unchanged - 3 fixed = 89 total (was 92) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
12s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6916 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879959/YARN-6916.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 127ffd265a48 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6814324 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16663/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16663/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Moving logging APIs over to slf4j in hadoop-yarn-server-common
> --
>
> Key: YARN-6916
> URL: https://issues.apache.org/jira/browse/YARN-6916
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka

[jira] [Commented] (YARN-6741) Deleting all children of a Parent Queue on refresh throws exception

2017-08-01 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110347#comment-16110347
 ] 

Sunil G commented on YARN-6741:
---

Thanks [~naganarasimha...@apache.org].
I understood the idea of minimizing the operations by which one can modify 
queues, and I will start working on the same. I think the next point makes this 
more convenient: to delete any child queue, that leaf queue has to be stopped 
first, which covers any unwanted operations or issues. Hence I think it is fine 
to convert a parent queue to a leaf queue when all its children are deleted 
(after stopping them first). Since that is covered, I have no objection to the 
current approach.

It would be great if one more test case is added, either standalone or as part 
of an existing case: out of the child queues of b, let one queue be RUNNING and 
try to reinitialize the queues by deleting all three. An exception will be 
thrown and we can validate the same. After that, we can move b3 to STOPPED and 
proceed as the current test case does. A rough sketch of that flow follows.
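
A sketch of the suggested test flow, under the assumption that helpers for setting queue state and reinitializing with children deleted exist in (or can be added to) the current test case; all names below are hypothetical stand-ins:

{code}
// Hypothetical sketch only; setQueueState() and
// reinitializeQueuesWithChildrenOfBDeleted() stand in for whatever helpers
// the existing capacity scheduler test actually uses.
import static org.junit.Assert.fail;

import java.io.IOException;
import org.junit.Test;

public class TestDeleteChildQueues {
  @Test
  public void testDeleteChildrenFailsWhileOneIsRunning() throws Exception {
    setQueueState("root.b.b3", QueueState.RUNNING);
    try {
      // Delete all three children of b in the new configuration and refresh.
      reinitializeQueuesWithChildrenOfBDeleted();
      fail("refreshQueue should fail while a child queue is RUNNING");
    } catch (IOException expected) {
      // Expected: a RUNNING child queue cannot be deleted.
    }

    // Stop b3 and retry; b should now be converted to a leaf queue.
    setQueueState("root.b.b3", QueueState.STOPPED);
    reinitializeQueuesWithChildrenOfBDeleted();
  }
}
{code}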

> Deleting all children of a Parent Queue on refresh throws exception
> ---
>
> Key: YARN-6741
> URL: https://issues.apache.org/jira/browse/YARN-6741
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Affects Versions: 3.0.0-alpha3
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: YARN-6741.001.patch, YARN-6741.002.patch, 
> YARN-6741.003.patch
>
>
> If we configure CS such that all children of a parent queue are deleted and 
> the parent is made a leaf queue, then the {{refreshQueue}} operation fails 
> when re-initializing the parent queue:
> {code}
>// Sanity check
>   if (!(newlyParsedQueue instanceof ParentQueue) || !newlyParsedQueue
>   .getQueuePath().equals(getQueuePath())) {
> throw new IOException(
> "Trying to reinitialize " + getQueuePath() + " from "
> + newlyParsedQueue.getQueuePath());
>   }
> {code}
> *Expected Behavior:*
> Converting a Parent Queue to leafQueue on refreshQueue needs to be supported.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6916) Moving logging APIs over to slf4j in hadoop-yarn-server-common

2017-08-01 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-6916:

Attachment: YARN-6916.002.patch

002: rebased
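
For context, this family of JIRAs swaps commons-logging for slf4j. A minimal before/after sketch of the typical change (hypothetical class, not taken from the actual patch):

{code}
// Before: commons-logging
//   import org.apache.commons.logging.Log;
//   import org.apache.commons.logging.LogFactory;
//   private static final Log LOG = LogFactory.getLog(Example.class);
//   LOG.info("Starting service " + name);

// After: slf4j, using {} placeholders instead of string concatenation
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Example {
  private static final Logger LOG = LoggerFactory.getLogger(Example.class);

  public void start(String name) {
    LOG.info("Starting service {}", name);
  }
}
{code}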

> Moving logging APIs over to slf4j in hadoop-yarn-server-common
> --
>
> Key: YARN-6916
> URL: https://issues.apache.org/jira/browse/YARN-6916
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: YARN-6712.01.patch, YARN-6916.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6922) Findbugs warning in YARN NodeManager

2017-08-01 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110313#comment-16110313
 ] 

Weiwei Yang commented on YARN-6922:
---

Hi [~xgong], I am taking this over and will submit a patch soon. Thanks

> Findbugs warning in YARN NodeManager
> 
>
> Key: YARN-6922
> URL: https://issues.apache.org/jira/browse/YARN-6922
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Xuan Gong
>Assignee: Weiwei Yang
>
> Several findbugs warnings in the YARN NodeManager package.
> {code}
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
>  is a mutable collection which should be package protected
> Bug type MS_MUTABLE_COLLECTION_PKGPROTECT (click for details) 
> In class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics
> Field 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
> At ContainerMetrics.java:[line 134]
> {code}
> {code}
>   
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
>  makes inefficient use of keySet iterator instead of entrySet iterator
> Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
> In class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer
> In method 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
> Field 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.pendingResources
> At ContainerLocalizer.java:[line 357]
> {code}
> {code}
>   
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
>  makes inefficient use of keySet iterator instead of entrySet iterator
> Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
> In class org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl
> In method 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
> Field 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.recentlyStoppedContainers
> At NodeStatusUpdaterImpl.java:[line 719]
> {code}
> {code}
> Hard coded reference to an absolute pathname in 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
> Bug type DMI_HARDCODED_ABSOLUTE_FILENAME (click for details) 
> In class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime
> In method 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
> File name /sys/fs/cgroup
> At DockerLinuxContainerRuntime.java:[line 490]
> {code}
> {code}
>   Useless object stored in variable removedNullContainers of method 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
> Bug type UC_USELESS_OBJECT (click for details) 
> In class org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl
> In method 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
> Value removedNullContainers
> Type java.util.HashSet
> At NodeStatusUpdaterImpl.java:[line 642]
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6741) Deleting all children of a Parent Queue on refresh throws exception

2017-08-01 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110304#comment-16110304
 ] 

Bibin A Chundatt commented on YARN-6741:


[~naganarasimha...@apache.org]
Sorry for the delay. Could you fix the checkstyle issue? I have triggered the 
build again to get a report.

{quote}
we do not have any limitation of running apps and anyway if all the children 
are already deleted then it implies that the running apps under this 
parentQueue is already zero
{quote}
Agree with this point. I think the patch is good to go in.
[~sunilg], any more comments?

> Deleting all children of a Parent Queue on refresh throws exception
> ---
>
> Key: YARN-6741
> URL: https://issues.apache.org/jira/browse/YARN-6741
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Affects Versions: 3.0.0-alpha3
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: YARN-6741.001.patch, YARN-6741.002.patch, 
> YARN-6741.003.patch
>
>
> If we configure CS such that all children of a parent queue are deleted and 
> the parent is made a leaf queue, then the {{refreshQueue}} operation fails 
> when re-initializing the parent queue:
> {code}
>// Sanity check
>   if (!(newlyParsedQueue instanceof ParentQueue) || !newlyParsedQueue
>   .getQueuePath().equals(getQueuePath())) {
> throw new IOException(
> "Trying to reinitialize " + getQueuePath() + " from "
> + newlyParsedQueue.getQueuePath());
>   }
> {code}
> *Expected Behavior:*
> Converting a Parent Queue to leafQueue on refreshQueue needs to be supported.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6914) Application application_1501553373419_0001 failed 2 times due to AM Container for appattempt_1501553373419_0001_000002 exited with exitCode: -1000

2017-08-01 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110293#comment-16110293
 ] 

Naganarasimha G R commented on YARN-6914:
-

Sorry, I meant the Hadoop mailing lists; see the full list at 
https://hadoop.apache.org/mailing_lists.html. You can send mail to 
u...@hadoop.apache.org.
That said, this seems like a Spark issue, so you could first try what I 
suggested if this is not a production setup.

> Application application_1501553373419_0001 failed 2 times due to AM Container 
> for appattempt_1501553373419_0001_02 exited with exitCode: -1000
> --
>
> Key: YARN-6914
> URL: https://issues.apache.org/jira/browse/YARN-6914
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.7.3
> Environment: Mac OS
>Reporter: abhishek bharani
>Priority: Critical
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> I am getting below error while running 
> spark-shell --master yarn
> Application application_1501553373419_0001 failed 2 times due to AM Container 
> for appattempt_1501553373419_0001_02 exited with exitCode: -1000
> For more detailed output, check the application tracking page:
> http://abhisheks-mbp:8088/cluster/app/application_1501553373419_0001
> Then, click on links to logs of each attempt.
> Diagnostics: null
> Failing this attempt. Failing the application.
> Below are the contents of yarn-site.xml :
> <?xml version="1.0"?>
> <configuration>
>   <property>
>     <name>yarn.nodemanager.aux-services</name>
>     <value>mapreduce_shuffle</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>     <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
>     <value>org.apache.spark.network.yarn.YarnShuffleService</value>
>   </property>
>   <property>
>     <name>yarn.log-aggregation-enable</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
>     <value>3600</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.hostname</name>
>     <value>localhost</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.resourcetracker.address</name>
>     <value>${yarn.resourcemanager.hostname}:8025</value>
>     <description>Enter your ResourceManager hostname.</description>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.scheduler.address</name>
>     <value>${yarn.resourcemanager.hostname}:8035</value>
>     <description>Enter your ResourceManager hostname.</description>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.address</name>
>     <value>${yarn.resourcemanager.hostname}:8055</value>
>     <description>Enter your ResourceManager hostname.</description>
>   </property>
>   <property>
>     <description>The http address of the RM web application.</description>
>     <name>yarn.resourcemanager.webapp.address</name>
>     <value>${yarn.resourcemanager.hostname}:8088</value>
>   </property>
> </configuration>
> I tried many solutions but none of them worked:
> 1. Added the property
>    yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage
>    to yarn-site.xml with value 98.5.
> 2. Added the yarn.nodemanager.aux-services.spark_shuffle.class property
>    (org.apache.spark.network.yarn.YarnShuffleService) to yarn-site.xml.
> 3. Added spark.yarn.jars=hdfs://localhost:50010/users/spark/jars/*.jar to
>    spark-defaults.conf.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110292#comment-16110292
 ] 

Hadoop QA commented on YARN-6130:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  9s{color} 
| {color:red} YARN-6130 does not apply to YARN-5355. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6130 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879895/YARN-6130-YARN-5355-branch-2.01.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16661/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [ATSv2 Security] Generate a delegation token for AM when app collector is 
> created and pass it to AM via NM and RM
> -
>
> Key: YARN-6130
> URL: https://issues.apache.org/jira/browse/YARN-6130
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6130-YARN-5355.01.patch, 
> YARN-6130-YARN-5355.02.patch, YARN-6130-YARN-5355.03.patch, 
> YARN-6130-YARN-5355.04.patch, YARN-6130-YARN-5355.05.patch, 
> YARN-6130-YARN-5355.06.patch, YARN-6130-YARN-5355-branch-2.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM

2017-08-01 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110289#comment-16110289
 ] 

Rohith Sharma K S commented on YARN-6130:
-

I think the branch name is too long, so it is not being picked up. Let me 
trigger it again.


> [ATSv2 Security] Generate a delegation token for AM when app collector is 
> created and pass it to AM via NM and RM
> -
>
> Key: YARN-6130
> URL: https://issues.apache.org/jira/browse/YARN-6130
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6130-YARN-5355.01.patch, 
> YARN-6130-YARN-5355.02.patch, YARN-6130-YARN-5355.03.patch, 
> YARN-6130-YARN-5355.04.patch, YARN-6130-YARN-5355.05.patch, 
> YARN-6130-YARN-5355.06.patch, YARN-6130-YARN-5355-branch-2.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6594) [API] Introduce SchedulingRequest object

2017-08-01 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110288#comment-16110288
 ] 

Jian He commented on YARN-6594:
---

On second thought, if you think a set is more suitable for placement 
scheduling because it essentially has only one type, namely PLACEMENT, I'm 
also fine with keeping it as-is.
Essentially, the use-cases are different and need not be forced to be the same.
I'll pursue what I need separately.

> [API] Introduce SchedulingRequest object
> 
>
> Key: YARN-6594
> URL: https://issues.apache.org/jira/browse/YARN-6594
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6594.001.patch
>
>
> This JIRA introduces a new SchedulingRequest object.
> It will be part of the {{AllocateRequest}} and will be used to define sizing 
> (e.g., number of allocations, size of allocations) and placement constraints 
> for allocations.
> Applications can use either this new object (when rich placement constraints 
> are required) or the existing {{ResourceRequest}} object.
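
As a rough illustration of the intent only, a hedged sketch of how an AM might populate such an object; the builder and field names below are hypothetical stand-ins, since the real API is whatever the attached patch defines:

{code}
// Hypothetical API shape, for illustration only.
SchedulingRequest req = SchedulingRequest.newBuilder()
    .allocationRequestId(42)
    // Sizing: 4 allocations of 2 GB / 2 vcores each.
    .resourceSizing(ResourceSizing.newInstance(4, Resource.newInstance(2048, 2)))
    // A rich placement constraint, e.g. anti-affinity on a tag.
    .placementConstraint(antiAffinityWithTag("web-server"))
    .build();
allocateRequest.setSchedulingRequests(Collections.singletonList(req));
{code}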



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6914) Application application_1501553373419_0001 failed 2 times due to AM Container for appattempt_1501553373419_0001_000002 exited with exitCode: -1000

2017-08-01 Thread abhishek bharani (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110283#comment-16110283
 ] 

abhishek bharani commented on YARN-6914:


Sure, [~naganarasimha...@apache.org], thank you for your support!
I didn't know that we first need to raise issues in the forums. I raised this 
issue on other forums like Stack Overflow but didn't receive any response.
Could you please provide the link to the forum?

> Application application_1501553373419_0001 failed 2 times due to AM Container 
> for appattempt_1501553373419_0001_02 exited with exitCode: -1000
> --
>
> Key: YARN-6914
> URL: https://issues.apache.org/jira/browse/YARN-6914
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.7.3
> Environment: Mac OS
>Reporter: abhishek bharani
>Priority: Critical
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> I am getting below error while running 
> spark-shell --master yarn
> Application application_1501553373419_0001 failed 2 times due to AM Container 
> for appattempt_1501553373419_0001_02 exited with exitCode: -1000
> For more detailed output, check the application tracking page:
> http://abhisheks-mbp:8088/cluster/app/application_1501553373419_0001
> Then, click on links to logs of each attempt.
> Diagnostics: null
> Failing this attempt. Failing the application.
> Below are the contents of yarn-site.xml :
> <?xml version="1.0"?>
> <configuration>
>   <property>
>     <name>yarn.nodemanager.aux-services</name>
>     <value>mapreduce_shuffle</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>     <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
>     <value>org.apache.spark.network.yarn.YarnShuffleService</value>
>   </property>
>   <property>
>     <name>yarn.log-aggregation-enable</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
>     <value>3600</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.hostname</name>
>     <value>localhost</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.resourcetracker.address</name>
>     <value>${yarn.resourcemanager.hostname}:8025</value>
>     <description>Enter your ResourceManager hostname.</description>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.scheduler.address</name>
>     <value>${yarn.resourcemanager.hostname}:8035</value>
>     <description>Enter your ResourceManager hostname.</description>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.address</name>
>     <value>${yarn.resourcemanager.hostname}:8055</value>
>     <description>Enter your ResourceManager hostname.</description>
>   </property>
>   <property>
>     <description>The http address of the RM web application.</description>
>     <name>yarn.resourcemanager.webapp.address</name>
>     <value>${yarn.resourcemanager.hostname}:8088</value>
>   </property>
> </configuration>
> I tried many solutions but none of them worked:
> 1. Added the property
>    yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage
>    to yarn-site.xml with value 98.5.
> 2. Added the yarn.nodemanager.aux-services.spark_shuffle.class property
>    (org.apache.spark.network.yarn.YarnShuffleService) to yarn-site.xml.
> 3. Added spark.yarn.jars=hdfs://localhost:50010/users/spark/jars/*.jar to
>    spark-defaults.conf.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object

2017-08-01 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110278#comment-16110278
 ] 

Jian He commented on YARN-6593:
---

Just commented on YARN-6594.

> [API] Introduce Placement Constraint object
> ---
>
> Key: YARN-6593
> URL: https://issues.apache.org/jira/browse/YARN-6593
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6593.001.patch, YARN-6593.002.patch, 
> YARN-6593.003.patch, YARN-6593.004.patch, YARN-6593.005.patch, 
> YARN-6593.006.patch, YARN-6593.007.patch, YARN-6593.008.patch
>
>
> Just removed the Fix version and moved it to the Target version, as we set 
> the fix version only after the patch is committed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6594) [API] Introduce SchedulingRequest object

2017-08-01 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110276#comment-16110276
 ] 

Jian He edited comment on YARN-6594 at 8/2/17 4:23 AM:
---

Copying some context from YARN-6593:
bq. The use-case is to be able to select containers based on keys and values; 
say I want to find my containers with version=v1, env=test, foo!=bar, and 
name=web1 or name=web2. These are not used for making scheduling decisions, 
but for annotating meta-info on the containers. We can still make it work by 
searching the entire string, as Arun said, but that's not explicit: the AM, 
client, or even the UI then needs to parse the string to extract the keys and 
values.

To support this, I probably need to add a separate field (a map) for this, 
call it containerTags; the allocationTag in this JIRA can then probably be 
named placementTags. (Naming can be figured out later.)

The question is that allocationTag is currently modeled as a set; do we need 
to make it a key/value pair to be consistent? In any case, a map can support 
whatever a set can support, but not the other way around, so I think it will 
be more flexible. A small illustration follows.

I will pursue what I need as containerTags in a separate JIRA, but thought the 
API might better be consistent.
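
The flexibility point can be seen directly: a map's key view behaves like a set, while a set cannot carry values. A small self-contained illustration (hypothetical tag names):

{code}
import java.util.HashMap;
import java.util.Map;

public class TagsExample {
  public static void main(String[] args) {
    Map<String, String> tags = new HashMap<>();
    tags.put("web-server", "");   // set-style tag: key only, empty value
    tags.put("version", "v1");    // key/value tag, as containerTags would need
    tags.put("env", "test");

    // A map can always act as a set via its key view...
    System.out.println(tags.keySet());
    // ...but a plain Set<String> has nowhere to put "v1" or "test".
  }
}
{code}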



was (Author: jianhe):
Copying some context from YARN-6593:
bq. The use-case is to be able to select containers based on keys and values; 
say I want to find my containers with version=v1, env=test, foo!=bar, and 
name=web1 or name=web2. These are not used for making scheduling decisions, 
but for annotating meta-info on the containers. We can still make it work by 
searching the entire string, as Arun said, but that's not explicit: the AM, 
client, or even the UI then needs to parse the string to extract the keys and 
values.

To support this, I probably need to add a separate field (a map) for this, 
call it containerTags; the allocationTag in this JIRA can then probably be 
named placementTags. (Naming can be figured out later.)

The question is that allocationTag is currently modeled as a set; do we need 
to make it a key/value pair to be consistent? In any case, a map can support 
whatever a set can support, but not the other way around, so I think it will 
be more flexible.

I will pursue what I need as containerTags in a separate JIRA, but thought the 
API might be better consistent.


> [API] Introduce SchedulingRequest object
> 
>
> Key: YARN-6594
> URL: https://issues.apache.org/jira/browse/YARN-6594
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6594.001.patch
>
>
> This JIRA introduces a new SchedulingRequest object.
> It will be part of the {{AllocateRequest}} and will be used to define sizing 
> (e.g., number of allocations, size of allocations) and placement constraints 
> for allocations.
> Applications can use either this new object (when rich placement constraints 
> are required) or the existing {{ResourceRequest}} object.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6594) [API] Introduce SchedulingRequest object

2017-08-01 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110276#comment-16110276
 ] 

Jian He edited comment on YARN-6594 at 8/2/17 4:22 AM:
---

Copying some context from YARN-6593:
bq. The use-case is to be able to select containers based on keys and values; 
say I want to find my containers with version=v1, env=test, foo!=bar, and 
name=web1 or name=web2. These are not used for making scheduling decisions, 
but for annotating meta-info on the containers. We can still make it work by 
searching the entire string, as Arun said, but that's not explicit: the AM, 
client, or even the UI then needs to parse the string to extract the keys and 
values.

To support this, I probably need to add a separate field (a map) for this, 
call it containerTags; the allocationTag in this JIRA can then probably be 
named placementTags. (Naming can be figured out later.)

The question is that allocationTag is currently modeled as a set; do we need 
to make it a key/value pair to be consistent? In any case, a map can support 
whatever a set can support, but not the other way around, so I think it will 
be more flexible.

I will pursue what I need as containerTags in a separate JIRA, but thought the 
API might be better consistent.



was (Author: jianhe):
Copying some context from YARN-6593:
bq. The use-case is to be able to select containers based on keys and values; 
say I want to find my containers with version=v1, env=test, foo!=bar, and 
name=web1 or name=web2. These are not used for making scheduling decisions, 
but for annotating meta-info on the containers. We can still make it work by 
searching the entire string, as Arun said, but that's not explicit: the AM, 
client, or even the UI then needs to parse the string to extract the keys and 
values.

To support this, I probably need to add a separate field (a map) for this, 
call it containerTags; the allocationTag in this JIRA can then probably be 
named placementTags.

The question is that allocationTag is currently modeled as a set; do we need 
to make it a key/value pair to be consistent? In any case, a map can support 
whatever a set can support, but not the other way around, so I think it will 
be more flexible.

I will pursue what I need as containerTags in a separate JIRA, but thought the 
API might be better consistent.


> [API] Introduce SchedulingRequest object
> 
>
> Key: YARN-6594
> URL: https://issues.apache.org/jira/browse/YARN-6594
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6594.001.patch
>
>
> This JIRA introduces a new SchedulingRequest object.
> It will be part of the {{AllocateRequest}} and will be used to define sizing 
> (e.g., number of allocations, size of allocations) and placement constraints 
> for allocations.
> Applications can use either this new object (when rich placement constraints 
> are required) or the existing {{ResourceRequest}} object.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6594) [API] Introduce SchedulingRequest object

2017-08-01 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110276#comment-16110276
 ] 

Jian He commented on YARN-6594:
---

Copying some context from YARN-6593:
bq. The use-case is to be able to select containers based on keys and values; 
say I want to find my containers with version=v1, env=test, foo!=bar, and 
name=web1 or name=web2. These are not used for making scheduling decisions, 
but for annotating meta-info on the containers. We can still make it work by 
searching the entire string, as Arun said, but that's not explicit: the AM, 
client, or even the UI then needs to parse the string to extract the keys and 
values.

To support this, I probably need to add a separate field (a map) for this, 
call it containerTags; the allocationTag in this JIRA can then probably be 
named placementTags.

The question is that allocationTag is currently modeled as a set; do we need 
to make it a key/value pair to be consistent? In any case, a map can support 
whatever a set can support, but not the other way around, so I think it will 
be more flexible.

I will pursue what I need as containerTags in a separate JIRA, but thought the 
API might be better consistent.


> [API] Introduce SchedulingRequest object
> 
>
> Key: YARN-6594
> URL: https://issues.apache.org/jira/browse/YARN-6594
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6594.001.patch
>
>
> This JIRA introduces a new SchedulingRequest object.
> It will be part of the {{AllocateRequest}} and will be used to define sizing 
> (e.g., number of allocations, size of allocations) and placement constraints 
> for allocations.
> Applications can use either this new object (when rich placement constraints 
> are required) or the existing {{ResourceRequest}} object.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6593) [API] Introduce Placement Constraint object

2017-08-01 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110248#comment-16110248
 ] 

Jian He edited comment on YARN-6593 at 8/2/17 4:05 AM:
---

The use-case is to be able to select containers based on keys and values; say 
I want to find my containers with version=v1, env=test, foo!=bar, and 
name=web1 or name=web2. Yes, currently it's a set of values, not a single 
value; I misspoke.
We can still make it work by searching the entire string, as Arun said, but 
that's not explicit: the AM, client, or even the UI then needs to parse the 
string to extract the keys and values.

I was thinking this change would affect this JIRA's implementation as well, 
hence continuing to comment here. If you think this can anyway be revisited, 
we can commit this first and work on it in YARN-6594. It doesn't matter to me.


was (Author: jianhe):
The use-case is to be able to select containers based on keys and values; say 
I want to find my containers with version=v1, env=test, and name=web1 or 
name=web2. Yes, currently it's a set of values, not a single value; I 
misspoke.
We can still make it work by searching the entire string, as Arun said, but 
that's not explicit: the AM, client, or even the UI then needs to parse the 
string to extract the keys and values.

I was thinking this change would affect this JIRA's implementation as well, 
hence continuing to comment here. If you think this can anyway be revisited, 
we can commit this first and work on it in YARN-6594. It doesn't matter to me.

> [API] Introduce Placement Constraint object
> ---
>
> Key: YARN-6593
> URL: https://issues.apache.org/jira/browse/YARN-6593
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6593.001.patch, YARN-6593.002.patch, 
> YARN-6593.003.patch, YARN-6593.004.patch, YARN-6593.005.patch, 
> YARN-6593.006.patch, YARN-6593.007.patch, YARN-6593.008.patch
>
>
> Just removed the Fix version and moved it to the Target version, as we set 
> the fix version only after the patch is committed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object

2017-08-01 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110248#comment-16110248
 ] 

Jian He commented on YARN-6593:
---

The use-case is to be able to select containers based on keys and values; say 
I want to find my containers with version=v1, env=test, and name=web1 or 
name=web2. Yes, currently it's a set of values, not a single value; I 
misspoke.
We can still make it work by searching the entire string, as Arun said, but 
that's not explicit: the AM, client, or even the UI then needs to parse the 
string to extract the keys and values.

I was thinking this change would affect this JIRA's implementation as well, 
hence continuing to comment here. If you think this can anyway be revisited, 
we can commit this first and work on it in YARN-6594. It doesn't matter to me.

> [API] Introduce Placement Constraint object
> ---
>
> Key: YARN-6593
> URL: https://issues.apache.org/jira/browse/YARN-6593
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6593.001.patch, YARN-6593.002.patch, 
> YARN-6593.003.patch, YARN-6593.004.patch, YARN-6593.005.patch, 
> YARN-6593.006.patch, YARN-6593.007.patch, YARN-6593.008.patch
>
>
> Just removed the Fix version and moved it to the Target version, as we set 
> the fix version only after the patch is committed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5977) ContainerManagementProtocol changes to support change of container ExecutionType

2017-08-01 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110247#comment-16110247
 ] 

Arun Suresh commented on YARN-5977:
---

Thanks for all the work here, [~kartheek].
The test case failure should be handled by YARN-6920.

Although your patch handles the increaseContainer deprecation by routing the 
call through the new updateContainer() API, we should open a new JIRA to 
expose the updateContainer() API via the NMClient.

Otherwise, the patch looks generally good.
+1 pending the findbugs, checkstyle, and javac fixes.


> ContainerManagementProtocol changes to support change of container 
> ExecutionType
> 
>
> Key: YARN-5977
> URL: https://issues.apache.org/jira/browse/YARN-5977
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: kartheek muthyala
> Attachments: YARN-5977.001.patch, YARN-5977.002.patch, 
> YARN-5977.003.patch, YARN-5977.004.patch, YARN-5977.005.patch
>
>
> JIRA to track the following changes:
> * Changes in the ContainerManagementProtocol - add an {{updateContainer()}} 
> method.
> * Add the new Request and Response objects and their corresponding PBImpl 
> classes.
> * Add a deprecated attribute to the {{increaseContainersResource()}} method, 
> since this functionality will be subsumed by {{updateContainer()}} (see the 
> sketch after this list).
> * Changes in NMClient to deprecate the increaseContainer methods and route 
> all calls through the new updateContainer API.
> * On the NM side, route increaseContainer calls to the new updateContainer 
> method.
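
The deprecation/routing pattern described in the list above usually looks like the following; a minimal sketch in which the converter helpers (toContainerUpdateRequest and friends) are hypothetical stand-ins for whatever the patch actually adds:

{code}
// Sketch of the routing idea only; the real signatures live in
// ContainerManagementProtocol and the new PBImpl classes.
@Deprecated
public IncreaseContainersResourceResponse increaseContainersResource(
    IncreaseContainersResourceRequest request) throws YarnException, IOException {
  // Legacy entry point: translate and delegate to the unified update API.
  ContainerUpdateResponse response =
      updateContainer(toContainerUpdateRequest(request));
  return toIncreaseContainersResourceResponse(response);
}

public ContainerUpdateResponse updateContainer(ContainerUpdateRequest request)
    throws YarnException, IOException {
  // Single code path for both resource and ExecutionType updates.
  return containerManager.handleUpdate(request);
}
{code}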



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6922) Findbugs warning in YARN NodeManager

2017-08-01 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reassigned YARN-6922:
-

Assignee: Weiwei Yang

> Findbugs warning in YARN NodeManager
> 
>
> Key: YARN-6922
> URL: https://issues.apache.org/jira/browse/YARN-6922
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Xuan Gong
>Assignee: Weiwei Yang
>
> Several findbugs warnings in the YARN NodeManager package.
> {code}
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
>  is a mutable collection which should be package protected
> Bug type MS_MUTABLE_COLLECTION_PKGPROTECT (click for details) 
> In class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics
> Field 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
> At ContainerMetrics.java:[line 134]
> {code}
> {code}
>   
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
>  makes inefficient use of keySet iterator instead of entrySet iterator
> Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
> In class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer
> In method 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
> Field 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.pendingResources
> At ContainerLocalizer.java:[line 357]
> {code}
> {code}
>   
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
>  makes inefficient use of keySet iterator instead of entrySet iterator
> Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
> In class org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl
> In method 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
> Field 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.recentlyStoppedContainers
> At NodeStatusUpdaterImpl.java:[line 719]
> {code}
> {code}
> Hard coded reference to an absolute pathname in 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
> Bug type DMI_HARDCODED_ABSOLUTE_FILENAME (click for details) 
> In class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime
> In method 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
> File name /sys/fs/cgroup
> At DockerLinuxContainerRuntime.java:[line 490]
> {code}
> {code}
>   Useless object stored in variable removedNullContainers of method 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
> Bug type UC_USELESS_OBJECT (click for details) 
> In class org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl
> In method 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
> Value removedNullContainers
> Type java.util.HashSet
> At NodeStatusUpdaterImpl.java:[line 642]
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-6914) Application application_1501553373419_0001 failed 2 times due to AM Container for appattempt_1501553373419_0001_000002 exited with exitCode: -1000

2017-08-01 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R resolved YARN-6914.
-
Resolution: Invalid

> Application application_1501553373419_0001 failed 2 times due to AM Container 
> for appattempt_1501553373419_0001_02 exited with exitCode: -1000
> --
>
> Key: YARN-6914
> URL: https://issues.apache.org/jira/browse/YARN-6914
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.7.3
> Environment: Mac OS
>Reporter: abhishek bharani
>Priority: Critical
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> I am getting below error while running 
> spark-shell --master yarn
> Application application_1501553373419_0001 failed 2 times due to AM Container 
> for appattempt_1501553373419_0001_02 exited with exitCode: -1000
> For more detailed output, check the application tracking page:
> http://abhisheks-mbp:8088/cluster/app/application_1501553373419_0001
> Then, click on links to logs of each attempt.
> Diagnostics: null
> Failing this attempt. Failing the application.
> Below are the contents of yarn-site.xml :
> <?xml version="1.0"?>
> <configuration>
>   <property>
>     <name>yarn.nodemanager.aux-services</name>
>     <value>mapreduce_shuffle</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>     <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
>     <value>org.apache.spark.network.yarn.YarnShuffleService</value>
>   </property>
>   <property>
>     <name>yarn.log-aggregation-enable</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
>     <value>3600</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.hostname</name>
>     <value>localhost</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.resourcetracker.address</name>
>     <value>${yarn.resourcemanager.hostname}:8025</value>
>     <description>Enter your ResourceManager hostname.</description>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.scheduler.address</name>
>     <value>${yarn.resourcemanager.hostname}:8035</value>
>     <description>Enter your ResourceManager hostname.</description>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.address</name>
>     <value>${yarn.resourcemanager.hostname}:8055</value>
>     <description>Enter your ResourceManager hostname.</description>
>   </property>
>   <property>
>     <description>The http address of the RM web application.</description>
>     <name>yarn.resourcemanager.webapp.address</name>
>     <value>${yarn.resourcemanager.hostname}:8088</value>
>   </property>
> </configuration>
> I tried many solutions but none of them worked:
> 1. Added the property
>    yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage
>    to yarn-site.xml with value 98.5.
> 2. Added the yarn.nodemanager.aux-services.spark_shuffle.class property
>    (org.apache.spark.network.yarn.YarnShuffleService) to yarn-site.xml.
> 3. Added spark.yarn.jars=hdfs://localhost:50010/users/spark/jars/*.jar to
>    spark-defaults.conf.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6914) Application application_1501553373419_0001 failed 2 times due to AM Container for appattempt_1501553373419_0001_000002 exited with exitCode: -1000

2017-08-01 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110239#comment-16110239
 ] 

Naganarasimha G R commented on YARN-6914:
-

Hi [~bharaniabhishek123],
Generally the approach is to first raise your issue on the user mailing list 
and, if it is confirmed there to be a defect, then file a JIRA. Please follow 
this procedure next time; otherwise every query will become a JIRA! I will 
close this issue; please raise it on the mailing list and, if required, let's 
reopen this issue.

Coming to the logs: has the NM started? This seems to be a Spark issue; maybe 
you can check in the Spark forum.
One possible reason is that the Spark aux-service LevelDB files might have 
become corrupted. If this is not a production cluster, please empty or back up 
the dir "/usr/local/hadoop/tmp/nm-local-dir/" and then try again.






> Application application_1501553373419_0001 failed 2 times due to AM Container 
> for appattempt_1501553373419_0001_02 exited with exitCode: -1000
> --
>
> Key: YARN-6914
> URL: https://issues.apache.org/jira/browse/YARN-6914
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.7.3
> Environment: Mac OS
>Reporter: abhishek bharani
>Priority: Critical
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> I am getting below error while running 
> spark-shell --master yarn
> Application application_1501553373419_0001 failed 2 times due to AM Container 
> for appattempt_1501553373419_0001_02 exited with exitCode: -1000
> For more detailed output, check the application tracking page:
> http://abhisheks-mbp:8088/cluster/app/application_1501553373419_0001
> Then, click on links to logs of each attempt.
> Diagnostics: null
> Failing this attempt. Failing the application.
> Below are the contents of yarn-site.xml :
> <?xml version="1.0"?>
> <configuration>
>   <property>
>     <name>yarn.nodemanager.aux-services</name>
>     <value>mapreduce_shuffle</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>     <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
>     <value>org.apache.spark.network.yarn.YarnShuffleService</value>
>   </property>
>   <property>
>     <name>yarn.log-aggregation-enable</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
>     <value>3600</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.hostname</name>
>     <value>localhost</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.resourcetracker.address</name>
>     <value>${yarn.resourcemanager.hostname}:8025</value>
>     <description>Enter your ResourceManager hostname.</description>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.scheduler.address</name>
>     <value>${yarn.resourcemanager.hostname}:8035</value>
>     <description>Enter your ResourceManager hostname.</description>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.address</name>
>     <value>${yarn.resourcemanager.hostname}:8055</value>
>     <description>Enter your ResourceManager hostname.</description>
>   </property>
>   <property>
>     <description>The http address of the RM web application.</description>
>     <name>yarn.resourcemanager.webapp.address</name>
>     <value>${yarn.resourcemanager.hostname}:8088</value>
>   </property>
> </configuration>
> I tried many solutions but none of them worked:
> 1. Added the property
>    yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage
>    to yarn-site.xml with value 98.5.
> 2. Added the yarn.nodemanager.aux-services.spark_shuffle.class property
>    (org.apache.spark.network.yarn.YarnShuffleService) to yarn-site.xml.
> 3. Added spark.yarn.jars=hdfs://localhost:50010/users/spark/jars/*.jar to
>    spark-defaults.conf.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5977) ContainerManagementProtocol changes to support change of container ExecutionType

2017-08-01 Thread kartheek muthyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kartheek muthyala updated YARN-5977:

Attachment: YARN-5977.005.patch

> ContainerManagementProtocol changes to support change of container 
> ExecutionType
> 
>
> Key: YARN-5977
> URL: https://issues.apache.org/jira/browse/YARN-5977
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: kartheek muthyala
> Attachments: YARN-5977.001.patch, YARN-5977.002.patch, 
> YARN-5977.003.patch, YARN-5977.004.patch, YARN-5977.005.patch
>
>
> JIRA to track the following changes:
> * Changes in the ContainerManagementProtocol - add an {{updateContainer()}} 
> method.
> * Add the new Request and Response objects and their corresponding PBImpl 
> classes.
> * Add a deprecated attribute to the {{increaseContainersResource()}} method, 
> since this functionality will be subsumed by {{updateContainer()}}.
> * Changes in NMClient to deprecate the increaseContainer methods and route 
> all calls through the new updateContainer API.
> * On the NM side, route increaseContainer calls to the new updateContainer 
> method.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6550) Capture launch_container.sh logs

2017-08-01 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110225#comment-16110225
 ] 

Allen Wittenauer edited comment on YARN-6550 at 8/2/17 3:37 AM:


If you group this, you won't have to set it per line.  i.e.:

{code}

#!/usr/bin/env bash

STDERR=/tmp/err.log
STDOUT=/tmp/out.log

{

  cmd
  cmd
  cmd

} 2>"${STDERR}" | tee -a "${STDERR}" > "${STDOUT}"

{code}

(or whatever.) In shell, it's generally not useful to split stderr away from 
stdout when debugging. The above should split stdout into one file and 
stdout+stderr into another. (I'm doing this on the fly, so the syntax might 
not be 100% correct, haha.)

It's probably also worth pointing out that several paths will have unexpected 
results if they contain spaces, due to the lack of quotes. It would be a good 
idea to run the computed shell script through shellcheck to find errors like 
that. (Although it won't catch e.g. -Dyarn.app.container.log.dir=, since 
that's hard-set.)


was (Author: aw):
If you group this, you won't have to set it per line.  i.e.:

{code}

#!/usr/bin/env bash

STDERR=/tmp/err.log
STDOUT=/tmp/out.log

{

  cmd
  cmd
  cmd

} 2>"${stderr}" | tee -a "${stderr}" > "${stdout}"

{code}

(or whatever.) In shell, it's generally not useful to split stderr away from 
stdout when debugging. The above should split stdout into one file and 
stdout+stderr into another. (I'm doing this on the fly, so the syntax might 
not be 100% correct, haha.)

It's probably also worth pointing out that several paths will have unexpected 
results if they contain spaces, due to the lack of quotes. It would be a good 
idea to run the computed shell script through shellcheck to find errors like 
that. (Although it won't catch e.g. -Dyarn.app.container.log.dir=, since 
that's hard-set.)

> Capture launch_container.sh logs
> 
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6550.002.patch, YARN-6550.patch
>
>
> launch_container.sh, which is generated by the NM, will do a bunch of things 
> (like creating links, etc.) while launching a process. No logs are captured 
> until {{exec}} is called. We need to capture all failures of 
> launch_container.sh for easier troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6550) Capture launch_container.sh logs

2017-08-01 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110225#comment-16110225
 ] 

Allen Wittenauer commented on YARN-6550:


If you group this, you won't have to set it per line.  i.e.:

{code}

#!/usr/bin/env bash

STDERR=/tmp/err.log
STDOUT=/tmp/out.log

{

  cmd
  cmd
  cmd

} 2>"${stderr}" | tee -a "${stderr}" > "${stdout}"

{code}

(or whatever.) In shell, it's generally not useful to split stderr away from 
stdout when debugging. The above should split stdout into one file and 
stdout+stderr into another. (I'm doing this on the fly, so the syntax might 
not be 100% correct, haha.)

It's probably also worth pointing out that several paths will have unexpected 
results if they contain spaces, due to the lack of quotes. It would be a good 
idea to run the computed shell script through shellcheck to find errors like 
that. (Although it won't catch e.g. -Dyarn.app.container.log.dir=, since 
that's hard-set.)

> Capture launch_container.sh logs
> 
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6550.002.patch, YARN-6550.patch
>
>
> launch_container.sh, which is generated by the NM, will do a bunch of things 
> (like creating links, etc.) while launching a process. No logs are captured 
> until {{exec}} is called. We need to capture all failures of 
> launch_container.sh for easier troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object

2017-08-01 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110175#comment-16110175
 ] 

Konstantinos Karanasos commented on YARN-6593:
--

Regarding the class with examples, as you say, let's move the examples to a 
separate JIRA, especially given that placement constraints can be specified 
both at the container level (YARN-6594) and the application level (YARN-6595). 
It would be good to show all the possible ways to define constraints there.

I would like to better understand the use cases that would require allocation 
tags to have values, but let's move that discussion to YARN-6594.
BTW, to be precise, allocation tags in constraints are modeled as a set of 
values (not a single value) with the key being null. We opted for this 
approach after discussions with [~leftnoteasy] and [~arun.sur...@gmail.com], 
so that we do not need multiple constraint objects when we want to specify a 
list of tags in the constraint.

PS: I am traveling today, thus the late responses.
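To make the "set of values with a null key" modeling concrete, a toy sketch; 
the type and field names are illustrative assumptions, not the API from the 
patch:

{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Illustrative only: one target expression carries a set of allocation
// tags, so a constraint against "hbase-master" OR "hbase-rs" does not
// need multiple constraint objects.
public class TargetExpressionSketch {
  private final String key;          // null when targeting allocation tags
  private final Set<String> values;  // the tag set

  public TargetExpressionSketch(String key, Set<String> values) {
    this.key = key;
    this.values = values;
  }

  public static void main(String[] args) {
    TargetExpressionSketch tags = new TargetExpressionSketch(
        null, new HashSet<>(Arrays.asList("hbase-master", "hbase-rs")));
    System.out.println(tags.key + " -> " + tags.values);
  }
}
{code}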

> [API] Introduce Placement Constraint object
> ---
>
> Key: YARN-6593
> URL: https://issues.apache.org/jira/browse/YARN-6593
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6593.001.patch, YARN-6593.002.patch, 
> YARN-6593.003.patch, YARN-6593.004.patch, YARN-6593.005.patch, 
> YARN-6593.006.patch, YARN-6593.007.patch, YARN-6593.008.patch
>
>
> Just removed Fixed version and moved it to target version as we set fix 
> version only after patch is committed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4161) Capacity Scheduler : Assign single or multiple containers per heart beat driven by configuration

2017-08-01 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110150#comment-16110150
 ] 

Sunil G commented on YARN-4161:
---

+1 LGTM

> Capacity Scheduler : Assign single or multiple containers per heart beat 
> driven by configuration
> 
>
> Key: YARN-4161
> URL: https://issues.apache.org/jira/browse/YARN-4161
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler
>Reporter: Mayank Bansal
>Assignee: Wei Yan
>  Labels: oct16-medium
> Attachments: YARN-4161.002.patch, YARN-4161.003.patch, 
> YARN-4161.004.patch, YARN-4161.005.patch, YARN-4161.006.patch, 
> YARN-4161.patch, YARN-4161.patch.1
>
>
> Capacity Scheduler right now schedules multiple containers per heartbeat if 
> there are more resources available on the node.
> This approach works fine; however, in some cases it does not distribute the 
> load across the cluster, so cluster throughput suffers. I am adding a feature 
> to drive this via configuration, so that we can control the number of 
> containers assigned per heartbeat.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6920) Fix TestNMClient failure due to YARN-6706

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110149#comment-16110149
 ] 

Hadoop QA commented on YARN-6920:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
42s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m  
7s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6920 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879941/YARN-6920.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2dea263a73f7 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6814324 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/16659/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16659/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16659/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix TestNMClient failure due to YARN-6706
> -
>
> Key: YARN-6920
> URL: https://issues.apache.org/jira/browse/YARN-6920
> Project: Hadoop YARN
>  Issue Type: Bug
>

[jira] [Commented] (YARN-6920) Fix TestNMClient failure due to YARN-6706

2017-08-01 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110113#comment-16110113
 ] 

Arun Suresh commented on YARN-6920:
---

Think I posted an incomplete patch earlier - updated it.
[~haibochen] / [~jianhe] - quick review?

> Fix TestNMClient failure due to YARN-6706
> -
>
> Key: YARN-6920
> URL: https://issues.apache.org/jira/browse/YARN-6920
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-6920.001.patch, YARN-6920.002.patch
>
>
> Looks like {{TestNMClient}} has been failing for a while. Opening this JIRA 
> to track the fix.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6920) Fix TestNMClient failure due to YARN-6706

2017-08-01 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6920:
--
Summary: Fix TestNMClient failure due to YARN-6706  (was: fix TestNMClient 
failure due to YARN-6706)

> Fix TestNMClient failure due to YARN-6706
> -
>
> Key: YARN-6920
> URL: https://issues.apache.org/jira/browse/YARN-6920
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-6920.001.patch, YARN-6920.002.patch
>
>
> Looks like {{TestNMClient}} has been failing for a while. Opening this JIRA 
> to track the fix.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6920) fix TestNMClient failure due to YARN-6706

2017-08-01 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6920:
--
Attachment: YARN-6920.002.patch

> fix TestNMClient failure due to YARN-6706
> -
>
> Key: YARN-6920
> URL: https://issues.apache.org/jira/browse/YARN-6920
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-6920.001.patch, YARN-6920.002.patch
>
>
> Looks like {{TestNMClient}} has been failing for a while. Opening this JIRA 
> to track the fix.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6920) fix TestNMClient failure due to YARN-6706

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110086#comment-16110086
 ] 

Hadoop QA commented on YARN-6920:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
49s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
42s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6920 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879937/YARN-6920.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e7cf421177a4 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9625a03 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/16658/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16658/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16658/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> fix TestNMClient failure due to YARN-6706
> -
>
> Key: YARN-6920
> URL: 

[jira] [Assigned] (YARN-6920) fix TestNMClient failure due to YARN-6706

2017-08-01 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen reassigned YARN-6920:


Assignee: Arun Suresh  (was: Haibo Chen)

> fix TestNMClient failure due to YARN-6706
> -
>
> Key: YARN-6920
> URL: https://issues.apache.org/jira/browse/YARN-6920
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-6920.001.patch
>
>
> Looks like {{TestNMClient}} has been failing for a while. Opening this JIRA 
> to track the fix.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6920) fix TestNMClient failure due to YARN-6706

2017-08-01 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110047#comment-16110047
 ] 

Haibo Chen commented on YARN-6920:
--

Assigned to you as such.

> fix TestNMClient failure due to YARN-6706
> -
>
> Key: YARN-6920
> URL: https://issues.apache.org/jira/browse/YARN-6920
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-6920.001.patch
>
>
> Looks like {{TestNMClient}} has been failing for a while. Opening this JIRA 
> to track the fix.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6842) Implement a new access type for queue

2017-08-01 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110043#comment-16110043
 ] 

Naganarasimha G R commented on YARN-6842:
-

[~daemon], 
Well, even if we have a single effective use case, I think we can go ahead and 
implement the requirement. 
IMO, though this feature seems useful superficially, I was not able to 
understand what a user would gain from viewing only the queue's apps without 
being able to take any decision. Further, on discussion with 
[~bibinchundatt], he mentioned that apps cannot control who, apart from the 
admin, can view them, which sounded like a security limitation to me.
Hence, if you see a pressing use case please share it; otherwise I am +0 on 
this. But anyway, thanks for raising this topic for discussion.
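For context, the existing access types live in the QueueACL enum; the 
proposal amounts to adding a read-only one. A hypothetical sketch, where 
VIEW_APPLICATIONS is an assumed name rather than anything committed:

{code}
// The first two constants mirror the existing
// org.apache.hadoop.yarn.api.records.QueueACL; the third is hypothetical.
public enum QueueACLSketch {
  SUBMIT_APPLICATIONS,   // existing: may submit apps to the queue
  ADMINISTER_QUEUE,      // existing: full admin rights on the queue
  VIEW_APPLICATIONS      // assumed new: view apps, no modify operations
}
{code}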

> Implement a new access type for queue
> -
>
> Key: YARN-6842
> URL: https://issues.apache.org/jira/browse/YARN-6842
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 2.8.2
>Reporter: YunFan Zhou
>Assignee: YunFan Zhou
> Attachments: YARN-6842.001.patch, YARN-6842.002.patch, 
> YARN-6842.003.patch
>
>
> At present, when we want to access the applications of a queue, all we can 
> do is become an administrator of the queue.
> But sometimes we only want to authorize someone to view the applications of 
> a queue, without allowing modify operations.
> Our current mechanism offers no way to do this, so I will implement a new 
> access type for queues to solve this problem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6927) Add support for individual resource types requests in MapReduce

2017-08-01 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-6927:
--

 Summary: Add support for individual resource types requests in 
MapReduce
 Key: YARN-6927
 URL: https://issues.apache.org/jira/browse/YARN-6927
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: YARN-3926
Reporter: Daniel Templeton


YARN-6504 adds support for resource profiles in MapReduce jobs, but resource 
profiles don't give users much flexibility in their resource requests.  To 
satisfy users' needs, MapReduce should also allow users to specify arbitrary 
resource requests.
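As a rough illustration of what an arbitrary resource request could look like 
with the resource-types API from the YARN-3926 branch (exact method names may 
differ from the final API, and "gpu" is an assumed custom type):

{code}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceInformation;

public class ResourceTypeRequestSketch {
  public static void main(String[] args) {
    // 2 GB of memory and 2 vcores, plus a custom resource type.
    Resource res = Resource.newInstance(2048, 2);
    // Hypothetical "gpu" resource with no units and a count of 1.
    res.setResourceInformation(
        "gpu", ResourceInformation.newInstance("gpu", "", 1));
    System.out.println(res);
  }
}
{code}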



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object

2017-08-01 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110036#comment-16110036
 ] 

Jian He commented on YARN-6593:
---

bq. Examples are essential, but can that be part of a followup JIRA? 
Particularly since the implementation(s) may affect the API.
Sure, no problem with that.

Actually, what's being done in YARN-6594 may also affect this jira's 
implementation.
Regarding container tags, IMO there are two types of tags:
1) Ones for scheduling decisions, which is what this umbrella jira covers. 
(This is modeled as a single value in the current patch.)
2) Ones for annotating meta-info on a container, which this umbrella jira 
originally didn't cover.

Right now 1) is modeled as a single value. For 2), I expect it to be a 
key/value pair to support various use-cases. 
The question is whether we should model both as key/value pairs for 
consistency. I checked with Wangda, and we prefer them to be the same for a 
consistent user experience. Btw, Kubernetes also models both as key/value 
pairs.
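A toy contrast of the two shapes under discussion; the names are illustrative 
only:

{code}
import java.util.Collections;
import java.util.Map;

public class TagModelSketch {
  // 1) A scheduling tag as a plain value, e.g. "hbase-master".
  static String schedulingTag = "hbase-master";

  // 2) A meta-info tag as a key/value pair, e.g. version -> 1.2.
  static Map<String, String> metaTags =
      Collections.singletonMap("version", "1.2");

  public static void main(String[] args) {
    // Modeling both as key/value pairs (as Kubernetes labels do) would
    // make 1) a special case of 2) with a conventional key.
    System.out.println(schedulingTag + " " + metaTags);
  }
}
{code}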



> [API] Introduce Placement Constraint object
> ---
>
> Key: YARN-6593
> URL: https://issues.apache.org/jira/browse/YARN-6593
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6593.001.patch, YARN-6593.002.patch, 
> YARN-6593.003.patch, YARN-6593.004.patch, YARN-6593.005.patch, 
> YARN-6593.006.patch, YARN-6593.007.patch, YARN-6593.008.patch
>
>
> Just removed Fixed version and moved it to target version as we set fix 
> version only after patch is committed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6704) Add Federation Interceptor restart when work preserving NM is enabled

2017-08-01 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6704:
-
Parent Issue: YARN-5597  (was: YARN-2915)

> Add Federation Interceptor restart when work preserving NM is enabled
> -
>
> Key: YARN-6704
> URL: https://issues.apache.org/jira/browse/YARN-6704
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
> Attachments: YARN-6704-YARN-2915.v1.patch, 
> YARN-6704-YARN-2915.v2.patch
>
>
> YARN-1336 added the ability to restart the NM without losing any running 
> containers. {{AMRMProxy}} restart was added in YARN-6127. In a Federated YARN 
> environment, there's additional state in the {{FederationInterceptor}} to 
> allow for spanning across multiple sub-clusters, so we need to enhance 
> {{FederationInterceptor}} to support work-preserving restart.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4972) Cleanup ContainerScheduler tests to remove long sleep times

2017-08-01 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110032#comment-16110032
 ] 

Haibo Chen commented on YARN-4972:
--

FYI, I have made changes in YARN-6675 to get rid of the long sleep. Will post 
patch there soon.

> Cleanup ContainerScheduler tests to remove long sleep times
> ---
>
> Key: YARN-4972
> URL: https://issues.apache.org/jira/browse/YARN-4972
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6920) fix TestNMClient failure due to YARN-6706

2017-08-01 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6920:
--
Target Version/s: 2.9.0, 3.0.0-beta1

> fix TestNMClient failure due to YARN-6706
> -
>
> Key: YARN-6920
> URL: https://issues.apache.org/jira/browse/YARN-6920
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Haibo Chen
> Attachments: YARN-6920.001.patch
>
>
> Looks like {{TestNMClient}} has been failing for a while. Opening this JIRA 
> to track the fix.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6853) Add MySql Scripts for FederationStateStore

2017-08-01 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110031#comment-16110031
 ] 

Carlo Curino commented on YARN-6853:


Thanks [~giovanni.fumarola] for the contribution; I just committed this to 
branch YARN-2915.

> Add MySql Scripts for FederationStateStore
> --
>
> Key: YARN-6853
> URL: https://issues.apache.org/jira/browse/YARN-6853
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Fix For: YARN-2915
>
> Attachments: YARN-6853.v1.patch, YARN-6853-YARN-2915.v1.patch, 
> YARN-6853-YARN-2915.v2.patch, YARN-6853-YARN-2915.v3.patch, 
> YARN-6853-YARN-2915.v4.patch, YARN-6853-YARN-2915.v5.patch
>
>
> In YARN-3663 we added the SQL scripts for SQLServer. We want to add the MySQL 
> scripts to be able to run Federation with a MySQL server, which will be less 
> performant but more convenient.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6668) Use cgroup to get container resource utilization

2017-08-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110029#comment-16110029
 ] 

ASF GitHub Bot commented on YARN-6668:
--

Github user szegedim commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/227#discussion_r130760729
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
 ---
@@ -60,6 +60,7 @@
 import org.apache.hadoop.yarn.server.api.records.NodeHealthStatus;
 import 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManager;
 import 
org.apache.hadoop.yarn.server.nodemanager.collectormanager.NMCollectorService;
+import org.apache.hadoop.yarn.server.api.records.OverAllocationInfo;
--- End diff --

Indeed. Thank you for the comment, I will fix this in the patch here: 
https://issues.apache.org/jira/browse/YARN-6668


> Use cgroup to get container resource utilization
> 
>
> Key: YARN-6668
> URL: https://issues.apache.org/jira/browse/YARN-6668
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
> Attachments: YARN-6668.000.patch, YARN-6668.001.patch
>
>
> Container Monitor relies on the proc file system to get container resource 
> utilization, which is not as efficient as reading cgroup accounting. When 
> cgroups are enabled, the NM should read cgroup stats instead.
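For flavor, reading CPU accounting from a cgroup is a single file read; a 
minimal sketch, where the mount point and hierarchy path are assumptions that 
the real NM would resolve from its cgroups configuration:

{code}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CgroupCpuUsageSketch {
  public static void main(String[] args) throws Exception {
    // Assumed cgroup v1 layout; the actual path varies by configuration.
    Path usage = Paths.get(
        "/sys/fs/cgroup/cpuacct/hadoop-yarn/container_1/cpuacct.usage");
    // cpuacct.usage holds cumulative CPU time in nanoseconds.
    long nanos = Long.parseLong(
        new String(Files.readAllBytes(usage)).trim());
    System.out.println("CPU ns: " + nanos);
  }
}
{code}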



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6920) fix TestNMClient failure due to YARN-6706

2017-08-01 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6920:
--
Fix Version/s: 3.0.0-beta1
   2.9.0

> fix TestNMClient failure due to YARN-6706
> -
>
> Key: YARN-6920
> URL: https://issues.apache.org/jira/browse/YARN-6920
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Haibo Chen
> Attachments: YARN-6920.001.patch
>
>
> Looks like {{TestNMClient}} has been failing for a while. Opening this JIRA 
> to track the fix.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6920) fix TestNMClient failure due to YARN-6706

2017-08-01 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6920:
--
Fix Version/s: (was: 3.0.0-beta1)
   (was: 2.9.0)

> fix TestNMClient failure due to YARN-6706
> -
>
> Key: YARN-6920
> URL: https://issues.apache.org/jira/browse/YARN-6920
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Haibo Chen
> Attachments: YARN-6920.001.patch
>
>
> Looks like {{TestNMClient}} has been failing for a while. Opening this JIRA 
> to track the fix.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6920) fix TestNMClient failure due to YARN-6706

2017-08-01 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6920:
--
Attachment: YARN-6920.001.patch

> fix TestNMClient failure due to YARN-6706
> -
>
> Key: YARN-6920
> URL: https://issues.apache.org/jira/browse/YARN-6920
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Haibo Chen
> Attachments: YARN-6920.001.patch
>
>
> Looks like {{TestNMClient}} has been failing for a while. Opening this JIRA 
> to track the fix.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6876) Create an abstract log writer for extendability

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110024#comment-16110024
 ] 

Hadoop QA commented on YARN-6876:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
51s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
17s{color} | {color:green} hadoop-yarn-project_hadoop-yarn generated 0 new + 
127 unchanged - 7 fixed = 127 total (was 134) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  0s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 14 new + 517 unchanged - 10 fixed = 531 total (was 527) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 14 new + 105 unchanged - 0 fixed = 119 total (was 105) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 32s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 24s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 26s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6876 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879925/YARN-6876-trunk.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 04429fbf7c1d 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 

[jira] [Commented] (YARN-65) Reduce RM app memory footprint once app has completed

2017-08-01 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-65?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110021#comment-16110021
 ] 

Naganarasimha G R commented on YARN-65:
---

Thanks [~maniraj...@gmail.com] for the latest patch.
A few nits:
# environment, Commands, LocalResources, ServiceData: for all these fields it 
would be better to get the reference and clear it rather than creating a new 
object.
# Encapsulating these modifications in a method would be better for 
readability and future reuse (see the sketch below).
# Please add some test cases for the same.
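A minimal sketch of nits 1 and 2 combined; the field and method names are 
assumptions for illustration, not the actual RMAppImpl members:

{code}
import java.util.HashMap;
import java.util.Map;

// Illustrative only: clear the existing references instead of replacing
// them with new empty objects, and keep the cleanup in one method.
public class CompletedAppCleanupSketch {
  private final Map<String, String> environment = new HashMap<>();
  private final Map<String, String> serviceData = new HashMap<>();

  /** Release per-launch state once the application has completed. */
  void clearLaunchContext() {
    environment.clear();   // reuse the reference, no new allocation
    serviceData.clear();
  }
}
{code}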

> Reduce RM app memory footprint once app has completed
> -
>
> Key: YARN-65
> URL: https://issues.apache.org/jira/browse/YARN-65
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 0.23.3
>Reporter: Jason Lowe
>Assignee: Manikandan R
> Attachments: YARN-65.001.patch, YARN-65.002.patch, YARN-65.003.patch
>
>
> The ResourceManager holds onto a configurable number of completed 
> applications (yarn.resourcemanager.max-completed-applications, defaults to 10000), 
> and the memory footprint of these completed applications can be significant.  
> For example, the {{submissionContext}} in RMAppImpl contains references to 
> protocolbuffer objects and other items that probably aren't necessary to keep 
> around once the application has completed.  We could significantly reduce the 
> memory footprint of the RM by releasing objects that are no longer necessary 
> once an application completes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6920) fix TestNMClient failure due to YARN-6706

2017-08-01 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110009#comment-16110009
 ] 

Arun Suresh commented on YARN-6920:
---

Actually, let me take over this [~haibochen], if you don't mind.
On further investigation, it looks like the actual issue is that during 
container re-initialization, the container resources that should have been 
reclaimed by the ContainerScheduler before re-launching the re-initialized 
container were never reclaimed - which resulted in a resource leak.
The reclaim was not happening earlier either, but prior to YARN-6706, if 
maxOppQueueLength == 0 we never even performed a resource availability check, 
so the ContainerManager test cases used to pass :)
Will post the fix shortly along with some additional assertions.
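In pseudo-Java, the invariant being described is roughly the following; the 
method names are assumptions, not the actual ContainerScheduler API:

{code}
// Illustrative only: on re-initialization the scheduler must give back
// the running container's resources before launching the new version,
// otherwise utilization counts the same container twice and leaks.
public class ReinitSketch {
  static class Container { }

  void reinitializeContainer(Container container) {
    releaseContainerResources(container);  // the step that was missing
    launchContainer(container);            // re-acquires the resources
  }

  // Hypothetical stand-ins for ContainerScheduler internals.
  void releaseContainerResources(Container c) { }
  void launchContainer(Container c) { }
}
{code}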

> fix TestNMClient failure due to YARN-6706
> -
>
> Key: YARN-6920
> URL: https://issues.apache.org/jira/browse/YARN-6920
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Haibo Chen
>
> Looks like {{TestNMClient}} has been failing for a while. Opening this JIRA 
> to track the fix.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6924) Metrics for Federation AMRMProxy

2017-08-01 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang reassigned YARN-6924:
--

Assignee: Botong Huang

> Metrics for Federation AMRMProxy
> 
>
> Key: YARN-6924
> URL: https://issues.apache.org/jira/browse/YARN-6924
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Botong Huang
>
> This JIRA proposes addition of metrics for Federation AMRMProxy



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6895) [FairScheduler] Preemption reservation may cause regular reservation leaks

2017-08-01 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109986#comment-16109986
 ] 

Miklos Szegedi commented on YARN-6895:
--

I opened YARN-6925 and YARN-6926. Thank you!

> [FairScheduler] Preemption reservation may cause regular reservation leaks
> --
>
> Key: YARN-6895
> URL: https://issues.apache.org/jira/browse/YARN-6895
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Blocker
> Attachments: YARN-6895.000.patch, YARN-6895.001.patch
>
>
> We found a limitation in the implementation of YARN-6432. If the container 
> released is smaller than the preemption request, a node reservation is 
> created that is never deleted.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6926) FSSchedulerNode reservation conflict

2017-08-01 Thread Miklos Szegedi (JIRA)
Miklos Szegedi created YARN-6926:


 Summary: FSSchedulerNode reservation conflict
 Key: YARN-6926
 URL: https://issues.apache.org/jira/browse/YARN-6926
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: fairscheduler
Reporter: Miklos Szegedi
Assignee: Yufei Gu


FSSchedulerNode reserves space for preemptor apps, but others may reserve it 
normally if there is not enough free space. This causes double accounting and 
double reservation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6924) Metrics for Federation AMRMProxy

2017-08-01 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-6924:
---
Issue Type: Sub-task  (was: New Feature)
Parent: YARN-5597

> Metrics for Federation AMRMProxy
> 
>
> Key: YARN-6924
> URL: https://issues.apache.org/jira/browse/YARN-6924
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>
> This JIRA proposes addition of metrics for Federation AMRMProxy



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6925) FSSchedulerNode could be simplified extracting preemption fields into a class

2017-08-01 Thread Miklos Szegedi (JIRA)
Miklos Szegedi created YARN-6925:


 Summary: FSSchedulerNode could be simplified extracting preemption 
fields into a class
 Key: YARN-6925
 URL: https://issues.apache.org/jira/browse/YARN-6925
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: fairscheduler
Reporter: Miklos Szegedi
Assignee: Yufei Gu
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6924) Metrics for Federation AMRMProxy

2017-08-01 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-6924:
---
Issue Type: New Feature  (was: Bug)

> Metrics for Federation AMRMProxy
> 
>
> Key: YARN-6924
> URL: https://issues.apache.org/jira/browse/YARN-6924
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Giovanni Matteo Fumarola
>
> This JIRA proposes addition of metrics for Federation AMRMProxy



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6924) Metrics for Federation AMRMProxy

2017-08-01 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-6924:
---
Description: This JIRA proposes addition of metrics for Federation AMRMProxy

> Metrics for Federation AMRMProxy
> 
>
> Key: YARN-6924
> URL: https://issues.apache.org/jira/browse/YARN-6924
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Giovanni Matteo Fumarola
>
> This JIRA proposes addition of metrics for Federation AMRMProxy



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6923) Metrics for Federation Router

2017-08-01 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola reassigned YARN-6923:
--

Assignee: Giovanni Matteo Fumarola

> Metrics for Federation Router
> -
>
> Key: YARN-6923
> URL: https://issues.apache.org/jira/browse/YARN-6923
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>
> This JIRA proposes addition of metrics for Federation StateStore



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6895) [FairScheduler] Preemption reservation may cause regular reservation leaks

2017-08-01 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109979#comment-16109979
 ] 

Yufei Gu commented on YARN-6895:


Can you create followup JIRAs for my question and suggestion? Otherwise looks 
good to me. 

> [FairScheduler] Preemption reservation may cause regular reservation leaks
> --
>
> Key: YARN-6895
> URL: https://issues.apache.org/jira/browse/YARN-6895
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Blocker
> Attachments: YARN-6895.000.patch, YARN-6895.001.patch
>
>
> We found a limitation in the implementation of YARN-6432. If the container 
> released is smaller than the preemption request, a node reservation is 
> created that is never deleted.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6924) Metrics for Federation AMRMProxy

2017-08-01 Thread Giovanni Matteo Fumarola (JIRA)
Giovanni Matteo Fumarola created YARN-6924:
--

 Summary: Metrics for Federation AMRMProxy
 Key: YARN-6924
 URL: https://issues.apache.org/jira/browse/YARN-6924
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Giovanni Matteo Fumarola






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6923) Metrics for Federation Router

2017-08-01 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-6923:
---
Description: This JIRA proposes addition of metrics for Federation 
StateStore

> Metrics for Federation Router
> -
>
> Key: YARN-6923
> URL: https://issues.apache.org/jira/browse/YARN-6923
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>
> This JIRA proposes addition of metrics for Federation StateStore



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6923) Metrics for Federation Router

2017-08-01 Thread Giovanni Matteo Fumarola (JIRA)
Giovanni Matteo Fumarola created YARN-6923:
--

 Summary: Metrics for Federation Router
 Key: YARN-6923
 URL: https://issues.apache.org/jira/browse/YARN-6923
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Giovanni Matteo Fumarola






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5603) Metrics for Federation StateStore

2017-08-01 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-5603:
---
Summary: Metrics for Federation StateStore  (was: Metrics for Federation 
entities like StateStore/Router/AMRMProxy)

> Metrics for Federation StateStore
> -
>
> Key: YARN-5603
> URL: https://issues.apache.org/jira/browse/YARN-5603
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
>
> This JIRA proposes addition of metrics for Federation entities like 
> StateStore/Router/AMRMProxy etc



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6895) [FairScheduler] Preemption reservation may cause regular reservation leaks

2017-08-01 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109978#comment-16109978
 ] 

Miklos Szegedi commented on YARN-6895:
--

The failing unit tests are not related to the change.

> [FairScheduler] Preemption reservation may cause regular reservation leaks
> --
>
> Key: YARN-6895
> URL: https://issues.apache.org/jira/browse/YARN-6895
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Blocker
> Attachments: YARN-6895.000.patch, YARN-6895.001.patch
>
>
> We found a limitation in the implementation of YARN-6432. If the container 
> released is smaller than the preemption request, a node reservation is 
> created that is never deleted.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5603) Metrics for Federation StateStore

2017-08-01 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-5603:
---
Description: This JIRA proposes addition of metrics for Federation 
StateStore  (was: This JIRA proposes addition of metrics for Federation 
entities like StateStore/Router/AMRMProxy etc)

> Metrics for Federation StateStore
> -
>
> Key: YARN-5603
> URL: https://issues.apache.org/jira/browse/YARN-5603
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
>
> This JIRA proposes addition of metrics for Federation StateStore






[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object

2017-08-01 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109976#comment-16109976
 ] 

Chris Douglas commented on YARN-6593:
-

bq. The only thing remaining in this jira is the example class showing how to 
use the APIs - is it worth doing or not?
Examples are essential, but can that be part of a follow-up JIRA? Particularly 
since the implementation(s) may affect the API.

> [API] Introduce Placement Constraint object
> ---
>
> Key: YARN-6593
> URL: https://issues.apache.org/jira/browse/YARN-6593
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6593.001.patch, YARN-6593.002.patch, 
> YARN-6593.003.patch, YARN-6593.004.patch, YARN-6593.005.patch, 
> YARN-6593.006.patch, YARN-6593.007.patch, YARN-6593.008.patch
>
>
> Just removed Fixed version and moved it to target version as we set fix 
> version only after patch is committed.






[jira] [Created] (YARN-6922) Findbugs warning in YARN NodeManager

2017-08-01 Thread Xuan Gong (JIRA)
Xuan Gong created YARN-6922:
---

 Summary: Findbugs warning in YARN NodeManager
 Key: YARN-6922
 URL: https://issues.apache.org/jira/browse/YARN-6922
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0-beta1
Reporter: Xuan Gong


Several findbugs warning in YARN NodeManager package.
{code}
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
 is a mutable collection which should be package protected
Bug type MS_MUTABLE_COLLECTION_PKGPROTECT (click for details) 
In class 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics
Field 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
At ContainerMetrics.java:[line 134]
{code}

{code}

org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
 makes inefficient use of keySet iterator instead of entrySet iterator
Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
In class 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer
In method 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
Field 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.pendingResources
At ContainerLocalizer.java:[line 357]
{code}

{code}

org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
 makes inefficient use of keySet iterator instead of entrySet iterator
Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
In class org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl
In method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
Field 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.recentlyStoppedContainers
At NodeStatusUpdaterImpl.java:[line 719]
{code}

{code}
Hard coded reference to an absolute pathname in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
Bug type DMI_HARDCODED_ABSOLUTE_FILENAME (click for details) 
In class 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime
In method 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
File name /sys/fs/cgroup
At DockerLinuxContainerRuntime.java:[line 490]
{code}

{code}
Useless object stored in variable removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
Bug type UC_USELESS_OBJECT (click for details) 
In class org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl
In method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
Value removedNullContainers
Type java.util.HashSet
At NodeStatusUpdaterImpl.java:[line 642]
{code}






[jira] [Commented] (YARN-6901) A CapacityScheduler app->LeafQueue deadlock found in branch-2.8

2017-08-01 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109972#comment-16109972
 ] 

Wangda Tan commented on YARN-6901:
--

bq. I agree it would be nice if getQueuePath were lockless, although long-term 
I think something like YARN-6917 would be preferable to a volatile approach. 
I'm OK with volatile in the short-term.
If we can't find an existing issue, I agree to defer the fix to YARN-6917. 

bq. That does not seem to have anything to do with reaching up the hierarchy.
Inside AbstractContainerAllocator:
{code}
application.getCSLeafQueue().getOrderingPolicy()
{code}
To me, grabbing the leaf queue's lock while holding the app's lock is also not 
good practice. 
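
To illustrate the hazard, here is a minimal, self-contained sketch (the lock 
objects and methods are hypothetical stand-ins for the app and LeafQueue 
monitors, not the actual CapacityScheduler code):
{code}
// Minimal lock-ordering sketch; all names are illustrative only.
public class LockOrderSketch {
  private final Object appLock = new Object();    // stands in for the app monitor
  private final Object queueLock = new Object();  // stands in for the LeafQueue monitor

  // Path 1 (e.g. handling an allocate result): app lock, then queue lock.
  void appThenQueue() {
    synchronized (appLock) {
      synchronized (queueLock) {
        // e.g. read the ordering policy or queue path
      }
    }
  }

  // Path 2 (e.g. LeafQueue#assignContainers): queue lock, then app lock.
  void queueThenApp() {
    synchronized (queueLock) {
      synchronized (appLock) {
        // e.g. iterate applications in the queue
      }
    }
  }
  // Two threads running these paths concurrently can each hold their first
  // monitor while blocking on the other's: the classic deadlock seen in the
  // attached stack trace.
}
{code}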

bq. It also looks like it introduces a bug if two threads try to call 
setOrderingPolicy at the same time 
Actually, orderingPolicy can be set only when the queue reinitializes (which 
is protected by the queue's synchronized lock). Do you think the issue still 
exists in that case?  

> A CapacityScheduler app->LeafQueue deadlock found in branch-2.8 
> 
>
> Key: YARN-6901
> URL: https://issues.apache.org/jira/browse/YARN-6901
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-6901.branch-2.8.001.patch
>
>
> Stacktrace:
> {code}
> Thread 22068: (state = BLOCKED)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueue.getParent()
>  @bci=0, line=185 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.getQueuePath()
>  @bci=8, line=262 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.AbstractContainerAllocator.getCSAssignmentFromAllocateResult(org.apache.hadoop.yarn.api.records.Resource,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.ContainerAllocation,
>  org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) 
> @bci=183, line=80 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.assignContainers(org.apache.hadoop.yarn.api.records.Resource,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode,
>  org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) 
> @bci=204, line=747 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.ContainerAllocator.assignContainers(org.apache.hadoop.yarn.api.records.Resource,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode,
>  org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) 
> @bci=16, line=49 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.assignContainers(org.apache.hadoop.yarn.api.records.Resource,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode,
>  org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode,
>  org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) 
> @bci=61, line=468 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(org.apache.hadoop.yarn.api.records.Resource,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode,
>  org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode)
>  @bci=148, line=876 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode)
>  @bci=157, line=1149 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.SchedulerEvent)
>  @bci=266, line=1277 (Compiled frame)
> 
>  Thread 22124: (state = BLOCKED)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.getReservedContainers()
>  @bci=0, line=336 (Compiled 

[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object

2017-08-01 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109961#comment-16109961
 ] 

Jian He commented on YARN-6593:
---

Let's move the discussion there. 
The only thing remaining in this jira is the example class showing how to use 
the APIs - is it worth doing or not?

> [API] Introduce Placement Constraint object
> ---
>
> Key: YARN-6593
> URL: https://issues.apache.org/jira/browse/YARN-6593
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6593.001.patch, YARN-6593.002.patch, 
> YARN-6593.003.patch, YARN-6593.004.patch, YARN-6593.005.patch, 
> YARN-6593.006.patch, YARN-6593.007.patch, YARN-6593.008.patch
>
>
> Just removed Fixed version and moved it to target version as we set fix 
> version only after patch is committed.






[jira] [Updated] (YARN-6876) Create an abstract log writer for extendability

2017-08-01 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-6876:

Attachment: YARN-6876-trunk.003.patch

> Create an abstract log writer for extendability
> ---
>
> Key: YARN-6876
> URL: https://issues.apache.org/jira/browse/YARN-6876
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6876-branch-2.001.patch, YARN-6876-trunk.001.patch, 
> YARN-6876-trunk.002.patch, YARN-6876-trunk.003.patch
>
>
> Currently, TFile log writer is used to aggregate log in YARN. We need to add 
> an abstract layer, and pick up the correct log writer based on the 
> configuration.
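
As a hedged sketch of what such a layer could look like (every name below is 
hypothetical, not the eventual YARN API), an abstract writer plus a factory 
that picks the implementation from configuration:
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;

// Hypothetical abstraction over TFile and future log formats.
abstract class AggregatedLogWriter {
  abstract void write(String containerId, byte[] logData) throws IOException;
  abstract void close() throws IOException;
}

// Hypothetical default implementation (stubbed for the sketch).
class TFileLogWriter extends AggregatedLogWriter {
  void write(String containerId, byte[] logData) { /* write via TFile */ }
  void close() { }
}

final class LogWriterFactory {
  // Hypothetical config key; the real key would live in YarnConfiguration.
  static final String WRITER_CLASS_KEY = "yarn.log-aggregation.writer.class";

  // Instantiate whichever writer the configuration names, defaulting to
  // the TFile-based implementation.
  static AggregatedLogWriter create(Configuration conf) {
    Class<? extends AggregatedLogWriter> clazz = conf.getClass(
        WRITER_CLASS_KEY, TFileLogWriter.class, AggregatedLogWriter.class);
    return ReflectionUtils.newInstance(clazz, conf);
  }
}
{code}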






[jira] [Commented] (YARN-6895) [FairScheduler] Preemption reservation may cause regular reservation leaks

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109959#comment-16109959
 ] 

Hadoop QA commented on YARN-6895:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 39 unchanged - 4 fixed = 39 total (was 43) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 47m 28s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector 
|
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6895 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879915/YARN-6895.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4877068cd193 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 778d4ed |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16654/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16654/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16654/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [FairScheduler] Preemption 

[jira] [Commented] (YARN-3254) HealthReport should include disk full information

2017-08-01 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109957#comment-16109957
 ] 

Wangda Tan commented on YARN-3254:
--

I think we need to state this clearly in the JMX info: whether usable space is 
below some percentage, or there is no usable space left. 

> HealthReport should include disk full information
> -
>
> Key: YARN-3254
> URL: https://issues.apache.org/jira/browse/YARN-3254
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Akira Ajisaka
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0-beta1
>
> Attachments: Screen Shot 2015-02-24 at 17.57.39.png, Screen Shot 
> 2015-02-25 at 14.38.10.png, YARN-3254-001.patch, YARN-3254-002.patch, 
> YARN-3254-003.patch, YARN-3254-004.patch
>
>
> When a NodeManager's local disk gets almost full, the NodeManager sends a 
> health report to ResourceManager that "local/log dir is bad" and the message 
> is displayed on ResourceManager Web UI. It's difficult for users to detect 
> why the dir is bad.






[jira] [Commented] (YARN-6550) Capture launch_container.sh logs

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109953#comment-16109953
 ] 

Hadoop QA commented on YARN-6550:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
59s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 37s{color} 
| {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 2 new + 20 unchanged - 2 fixed = 22 total (was 22) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 31 new + 117 unchanged - 1 fixed = 148 total (was 118) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
4s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 4 new + 5 unchanged - 0 fixed = 9 total (was 5) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
24s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 4 new + 105 unchanged - 0 fixed = 109 total (was 105) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m  
0s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
|  |  Unread field:field be static?  At ContainerLaunch.java:[line 863] |
|  |  Unread field:field be static?  At ContainerLaunch.java:[line 862] |
|  |  Format-string method String.format(String, Object[]) called with format 
string "@%s symlink "%s" "%s"" wants 3 arguments but is given 4 in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch$WindowsShellScriptBuilder.link(Path,
 Path)  At ContainerLaunch.java:with format string "@%s symlink "%s" "%s"" 
wants 3 arguments but is given 4 in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch$WindowsShellScriptBuilder.link(Path,
 Path)  At ContainerLaunch.java:[line 1144] |
|  |  Format-string method String.format(String, Object[]) called with format 
string "@if not exist "%s" mkdir "%s"" wants 2 arguments but is given 3 in 

[jira] [Commented] (YARN-6903) Yarn-native-service framework core rewrite

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109931#comment-16109931
 ] 

Hadoop QA commented on YARN-6903:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 60 new or modified test 
files. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
59s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
28s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
34s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
42s{color} | {color:green} yarn-native-services passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
28s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
57s{color} | {color:green} yarn-native-services passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  5m 46s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn generated 3 new + 133 unchanged - 
5 fixed = 136 total (was 138) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 35s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 517 new + 1522 unchanged - 403 fixed = 2039 total (was 1925) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
22s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
11s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 26 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
3s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 generated 5 new + 0 unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
35s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-yarn-slider in the patch failed. 

[jira] [Comment Edited] (YARN-6593) [API] Introduce Placement Constraint object

2017-08-01 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109921#comment-16109921
 ] 

Arun Suresh edited comment on YARN-6593 at 8/1/17 10:38 PM:


bq. Actually, my comments should apply to YARN-6594 rather than this jira.
[~jianhe], I am guessing you are referring to the *AllocationTags*... then yes, 
we should continue that discussion in YARN-6594, since this jira pertains just 
to the *PlacementConstraint* object. (Can you please cross-post it there as 
well?)

My opinion, though, is that the allocation tags should remain a list of string 
keys. I understand having it as a map might offload some work from the AM, but 
do you see it helping the RM/scheduler make a scheduling decision? If it is 
just to select a bunch of containers, one can still perform a search like: 
select containers where allocationtags contains "env=test", etc. Forcing the 
tags to be a map might even make the querying more complicated.


was (Author: asuresh):
bq. Actually, my comments should apply to YARN-6594 rather than this jira.
[~jianhe], I am guessing you are referring to the *AllocationTags*... then yes, 
we should continue that discussion in YARN-6594, since this jira pertains just 
to the *PlacementConstraint* object. (Can you please cross-post it there as 
well?)

My opinion, though, is that the allocation tags should remain a list of string 
keys. I understand this might offload some work from the AM, but do you see it 
helping the RM/scheduler make a scheduling decision? If it is just to select a 
bunch of containers, one can still perform a search like: select containers 
where allocationtags contains "env=test", etc. Forcing the tags to be a map 
might even make the querying more complicated.

> [API] Introduce Placement Constraint object
> ---
>
> Key: YARN-6593
> URL: https://issues.apache.org/jira/browse/YARN-6593
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6593.001.patch, YARN-6593.002.patch, 
> YARN-6593.003.patch, YARN-6593.004.patch, YARN-6593.005.patch, 
> YARN-6593.006.patch, YARN-6593.007.patch, YARN-6593.008.patch
>
>
> Just removed Fixed version and moved it to target version as we set fix 
> version only after patch is committed.






[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object

2017-08-01 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109921#comment-16109921
 ] 

Arun Suresh commented on YARN-6593:
---

bq. Actually, my comments should apply to YARN-6594 rather than this jira.
[~jianhe], I am guessing you are referring to the *AllocationTags*... then yes, 
we should continue that discussion in YARN-6594, since this jira pertains just 
to the *PlacementConstraint* object. (Can you please cross-post it there as 
well?)

My opinion, though, is that the allocation tags should remain a list of string 
keys. I understand this might offload some work from the AM, but do you see it 
helping the RM/scheduler make a scheduling decision? If it is just to select a 
bunch of containers, one can still perform a search like: select containers 
where allocationtags contains "env=test", etc. Forcing the tags to be a map 
might even make the querying more complicated.
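
To make the select-by-tag idea concrete, a small sketch (hypothetical types; 
not the YARN-6594 API) of filtering containers whose plain string tag set 
contains a given tag:
{code}
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical container record carrying plain string allocation tags.
class TaggedContainer {
  final String id;
  final Set<String> allocationTags;
  TaggedContainer(String id, Set<String> allocationTags) {
    this.id = id;
    this.allocationTags = allocationTags;
  }
}

class TagQuery {
  // "select containers where allocationtags contains 'env=test'"
  static List<TaggedContainer> withTag(List<TaggedContainer> all, String tag) {
    return all.stream()
        .filter(c -> c.allocationTags.contains(tag))
        .collect(Collectors.toList());
  }
}
{code}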

> [API] Introduce Placement Constraint object
> ---
>
> Key: YARN-6593
> URL: https://issues.apache.org/jira/browse/YARN-6593
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6593.001.patch, YARN-6593.002.patch, 
> YARN-6593.003.patch, YARN-6593.004.patch, YARN-6593.005.patch, 
> YARN-6593.006.patch, YARN-6593.007.patch, YARN-6593.008.patch
>
>
> Just removed Fixed version and moved it to target version as we set fix 
> version only after patch is committed.






[jira] [Commented] (YARN-6846) Nodemanager can fail to fully delete application local directories when applications are killed

2017-08-01 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109918#comment-16109918
 ] 

Eric Badger commented on YARN-6846:
---

bq. If I'm reading the man pages correctly for geteuid(), seteuid(), and 
readdir(), they don't generate ENOENT
For {{geteuid()}} and {{seteuid()}}: those aren't the calls setting {{errno}} 
in the code change in the first referenced block (1837). 
{noformat}
-if (rmdir(path) != 0) {
+if (rmdir(path) != 0 && errno != ENOENT) {
{noformat}
{{rmdir(path)}} is what sets {{errno}} here, and it can return {{ENOENT}}. 

As far as {{readdir()}} goes, it looks like POSIX has it returning {{ENOENT}} 
while Linux doesn't. I think it's better to go with POSIX here, but I'll defer 
to [~jlowe] on that.
http://pubs.opengroup.org/onlinepubs/9699919799/functions/readdir.html
http://man7.org/linux/man-pages/man3/readdir.3.html
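
The behavioral point, treating an entry that vanished between listing and 
removal as already deleted, is the same one {{java.nio.file.Files#deleteIfExists}} 
encodes; a Java analog of the C change, for illustration only (the actual fix 
is in the container-executor C code):
{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

class TolerantDelete {
  // Delete a path, but treat "already gone" as success, since a concurrent
  // deletion (e.g. the container vs. app directory race) may have won.
  static void deleteIgnoringMissing(Path p) throws IOException {
    try {
      Files.delete(p);
    } catch (NoSuchFileException ignored) {
      // Analogous to the C change: rmdir(path) != 0 && errno != ENOENT.
    }
  }
}
{code}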

> Nodemanager can fail to fully delete application local directories when 
> applications are killed
> ---
>
> Key: YARN-6846
> URL: https://issues.apache.org/jira/browse/YARN-6846
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.1
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Critical
> Attachments: YARN-6846.001.patch, YARN-6846.002.patch, 
> YARN-6846.003.patch
>
>
> When an application is killed all of the running containers are killed and 
> the app waits for the containers to complete before cleaning up.  As each 
> container completes the container directory is deleted via the 
> DeletionService.  After all containers have completed the app completes and 
> the app directory is deleted.  If the app completes quickly enough then the 
> deletion of the container and app directories can race against each other.  
> If the container deletion executor deletes a file just before the application 
> deletion executor then it can cause the application deletion executor to 
> fail, leaving the remaining entries in the application directory lingering.






[jira] [Commented] (YARN-6788) Improve performance of resource profile branch

2017-08-01 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109913#comment-16109913
 ] 

Daniel Templeton commented on YARN-6788:


Thanks for the updated patch, [~sunilg].  Here are the issues I still see open:

* In {{Resources.addTo()}}, {{Resources.subtractFrom()}}, 
{{Resources.multiplyTo()}}, {{Resources.multiplyAndAddTo()}}, 
{{Resources.multiplyAndRoundDown()}}, {{Resources.fitsIn()}}, 
{{Resources.componentwiseMin()}}, and {{Resources.componentwiseMax()}}, the 
variable in the _foreach_ should be named {{lhsValue}} instead of calling it 
{{entry}} and then declaring a new variable for it called {{lhsValue}}.
* {quote}In DominantResourceCalculator(), we already invoke getResourceTypes. 
So i think its fine.{quote}  In {{ResourceUtils}} you're creating an API, and 
there's no reason some other part of the code wouldn't call 
{{getResourceNamesArray()}} in the future.  To prevent nasty surprises, the 
methods of your API should have consistent behavior.  If you're worried about 
performance cost of the initialization check, then you should take a different 
approach.  See my next point.
* The DCL still won't work as implemented.  DCL is hard to get right, and it's 
with good reason that it's suggested to follow a recipe for DCL with no 
modifications.  Instead, how about making the {{ResourceUtils}} class into a 
proper singleton?  You can then do the initialization in the {{getInstance()}} 
call and not have to worry about checking it in every API call.  Yeah, you'll 
still need DCL for the initialization, but with a singleton you'll be able to 
use a DCL recipe unaltered.
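
For reference, a minimal sketch of the canonical DCL singleton recipe (the 
class name is hypothetical, not the actual {{ResourceUtils}} code); both the 
{{volatile}} field and the local variable are load-bearing:
{code}
// Canonical double-checked-locking singleton; names are illustrative only.
final class ResourceTypesSingleton {
  private static volatile ResourceTypesSingleton instance;

  private ResourceTypesSingleton() {
    // one-time initialization, e.g. loading the resource types
  }

  static ResourceTypesSingleton getInstance() {
    ResourceTypesSingleton result = instance;  // single volatile read
    if (result == null) {
      synchronized (ResourceTypesSingleton.class) {
        result = instance;
        if (result == null) {
          instance = result = new ResourceTypesSingleton();
        }
      }
    }
    return result;
  }
}
{code}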

> Improve performance of resource profile branch
> --
>
> Key: YARN-6788
> URL: https://issues.apache.org/jira/browse/YARN-6788
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-6788-YARN-3926.001.patch, 
> YARN-6788-YARN-3926.002.patch, YARN-6788-YARN-3926.003.patch, 
> YARN-6788-YARN-3926.004.patch, YARN-6788-YARN-3926.005.patch, 
> YARN-6788-YARN-3926.006.patch, YARN-6788-YARN-3926.007.patch, 
> YARN-6788-YARN-3926.008.patch, YARN-6788-YARN-3926.009.patch, 
> YARN-6788-YARN-3926.010.patch, YARN-6788-YARN-3926.011.patch, 
> YARN-6788-YARN-3926.012.patch, YARN-6788-YARN-3926.013.patch, 
> YARN-6788-YARN-3926.014.patch, YARN-6788-YARN-3926.015.patch, 
> YARN-6788-YARN-3926.016.patch, YARN-6788-YARN-3926.017.patch, 
> YARN-6788-YARN-3926.018.patch, YARN-6788-YARN-3926.019.patch
>
>
> Currently we see about a 15% performance delta with this branch. 
> A few performance improvements are needed to address it.
> Also this patch will handle 
> [comments|https://issues.apache.org/jira/browse/YARN-6761?focusedCommentId=16075418=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16075418]
>  from [~leftnoteasy].






[jira] [Commented] (YARN-6811) [ATS1.5] All history logs should be kept under its own User Directory.

2017-08-01 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109903#comment-16109903
 ] 

Junping Du commented on YARN-6811:
--

Thanks [~rohithsharma] for contributing the patch! The approach here looks 
generally good to me. The only concern is that there could be a slight 
performance impact, since it will search two directories (with the user and 
without the user). One improvement could be to skip the user directory when 
"keep-under-user-dir" is set to false. The reverse is not true, because we 
need to handle the rolling-upgrade case.

Some detailed comments:

{noformat}
public static final String
+  TIMELINE_SERVICE_ENTITYGROUP_FS_STORE_KEEP_UNDER_USER_DIR =
+  TIMELINE_SERVICE_ENTITYGROUP_FS_STORE_PREFIX + "keep-under-user-dir"
{noformat}
The name of the newly added configuration is too long; can it simply be 
"with-user-dir"?

We should document the new configuration in yarn-default.xml with a proper 
explanation of what this configuration is used for.

As per my comments offline, {{createUserDir(String user)}} should have a 
better name, given that it doesn't always create the user dir (that depends on 
configuration). Maybe it's better to call it {{getAppRootDir()}}?

We need to handle the rolling-upgrade case. I think we can add a unit test 
here: write an app log with "keep-under-user-dir" = false so it goes to the 
old location, then try to read it back with "keep-under-user-dir" = true.
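
A sketch of the lookup order being discussed (the class, method, and flag 
names here are hypothetical): prefer the per-user directory when the flag is 
on, and fall back to the old flat location so data written before the upgrade 
stays readable:
{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class AppRootDirResolver {
  // Hypothetical helper mirroring the proposed getAppRootDir() semantics.
  static Path resolve(FileSystem fs, Path activeDir, String user,
      String appId, boolean keepUnderUserDir) throws IOException {
    if (keepUnderUserDir) {
      Path userScoped = new Path(new Path(activeDir, user), appId);
      if (fs.exists(userScoped)) {
        return userScoped;
      }
      // Rolling upgrade: data written before the flag was enabled still
      // lives in the old flat location, so fall through and look there.
    }
    return new Path(activeDir, appId);
  }
}
{code}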

> [ATS1.5]  All history logs should be kept under its own User Directory.
> ---
>
> Key: YARN-6811
> URL: https://issues.apache.org/jira/browse/YARN-6811
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: timelineclient, timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-6811.01.patch
>
>
> ATS1.5 stores history data in underlying FileSystem folder paths, i.e. 
> */active-dir* and */done-dir*. These base directories are protected against 
> unauthorized access to other users' data by setting the sticky bit on 
> /active-dir. 
> But object-store filesystems such as WASB do not have user access control 
> on folders and files. When WASB is used as the underlying file system for 
> ATS1.5, the history data stored in the FS is accessible to all users. 
> *This would be a security risk.*
> I propose to keep history data under its own user directory, i.e. 
> */active-dir/$USER*. Even this does not solve basic user access at the FS 
> level, but it provides the capability to plug in Apache Ranger policies for 
> each user's folder. 
> One thing to note is that setting policies on each user folder is an admin 
> responsibility. But grouping all of one user's history data in a single 
> folder makes it possible to set policies so that user access control is 
> achieved. 






[jira] [Commented] (YARN-4161) Capacity Scheduler : Assign single or multiple containers per heart beat driven by configuration

2017-08-01 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109907#comment-16109907
 ] 

Wangda Tan commented on YARN-4161:
--

And reassigned to [~ywskycn]. 

> Capacity Scheduler : Assign single or multiple containers per heart beat 
> driven by configuration
> 
>
> Key: YARN-4161
> URL: https://issues.apache.org/jira/browse/YARN-4161
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler
>Reporter: Mayank Bansal
>Assignee: Mayank Bansal
>  Labels: oct16-medium
> Attachments: YARN-4161.002.patch, YARN-4161.003.patch, 
> YARN-4161.004.patch, YARN-4161.005.patch, YARN-4161.006.patch, 
> YARN-4161.patch, YARN-4161.patch.1
>
>
> The Capacity Scheduler right now schedules multiple containers per heartbeat 
> if more resources are available on the node.
> This approach works fine; however, in some cases it does not distribute the 
> load across the cluster, so cluster throughput suffers. I am adding a 
> feature to drive this via configuration, so that we can control the number 
> of containers assigned per heartbeat.






[jira] [Assigned] (YARN-4161) Capacity Scheduler : Assign single or multiple containers per heart beat driven by configuration

2017-08-01 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan reassigned YARN-4161:


Assignee: Wei Yan  (was: Mayank Bansal)

> Capacity Scheduler : Assign single or multiple containers per heart beat 
> driven by configuration
> 
>
> Key: YARN-4161
> URL: https://issues.apache.org/jira/browse/YARN-4161
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler
>Reporter: Mayank Bansal
>Assignee: Wei Yan
>  Labels: oct16-medium
> Attachments: YARN-4161.002.patch, YARN-4161.003.patch, 
> YARN-4161.004.patch, YARN-4161.005.patch, YARN-4161.006.patch, 
> YARN-4161.patch, YARN-4161.patch.1
>
>
> The Capacity Scheduler right now schedules multiple containers per heartbeat 
> if more resources are available on the node.
> This approach works fine; however, in some cases it does not distribute the 
> load across the cluster, so cluster throughput suffers. I am adding a 
> feature to drive this via configuration, so that we can control the number 
> of containers assigned per heartbeat.






[jira] [Commented] (YARN-4161) Capacity Scheduler : Assign single or multiple containers per heart beat driven by configuration

2017-08-01 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109905#comment-16109905
 ] 

Wangda Tan commented on YARN-4161:
--

Latest patch looks good, +1; will commit tomorrow if nobody objects.

> Capacity Scheduler : Assign single or multiple containers per heart beat 
> driven by configuration
> 
>
> Key: YARN-4161
> URL: https://issues.apache.org/jira/browse/YARN-4161
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler
>Reporter: Mayank Bansal
>Assignee: Mayank Bansal
>  Labels: oct16-medium
> Attachments: YARN-4161.002.patch, YARN-4161.003.patch, 
> YARN-4161.004.patch, YARN-4161.005.patch, YARN-4161.006.patch, 
> YARN-4161.patch, YARN-4161.patch.1
>
>
> The Capacity Scheduler right now schedules multiple containers per heartbeat 
> if more resources are available on the node.
> This approach works fine; however, in some cases it does not distribute the 
> load across the cluster, so cluster throughput suffers. I am adding a 
> feature to drive this via configuration, so that we can control the number 
> of containers assigned per heartbeat.
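
A minimal sketch of the knob described above (the class and the cap parameter 
are hypothetical; the real patch wires this through CapacityScheduler 
configuration):
{code}
import java.util.function.BooleanSupplier;

class HeartbeatAssignLimiter {
  // tryAssign attempts one container assignment on the heartbeating node
  // and reports whether anything was placed.
  static int allocateOnHeartbeat(int maxAssignPerHeartbeat,
      BooleanSupplier tryAssign) {
    int assigned = 0;
    while (assigned < maxAssignPerHeartbeat && tryAssign.getAsBoolean()) {
      assigned++;  // stop at the cap so the load spreads across nodes
    }
    return assigned;
  }
}
{code}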






[jira] [Commented] (YARN-6853) Add MySql Scripts for FederationStateStore

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109886#comment-16109886
 ] 

Hadoop QA commented on YARN-6853:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} YARN-2915 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
44s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
7s{color} | {color:green} YARN-2915 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6853 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879918/YARN-6853-YARN-2915.v5.patch
 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux cc8ccce42d8b 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / 6634134 |
| modules | C: hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16655/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add MySql Scripts for FederationStateStore
> --
>
> Key: YARN-6853
> URL: https://issues.apache.org/jira/browse/YARN-6853
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-6853.v1.patch, YARN-6853-YARN-2915.v1.patch, 
> YARN-6853-YARN-2915.v2.patch, YARN-6853-YARN-2915.v3.patch, 
> YARN-6853-YARN-2915.v4.patch, YARN-6853-YARN-2915.v5.patch
>
>
> In YARN-3663 we added the SQL scripts for SQLServer. We want to add the MySQL 
> scripts to be able to run Federation with a MySQL server, which will be less 
> performant but more convenient.






[jira] [Updated] (YARN-6550) Capture launch_container.sh logs

2017-08-01 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-6550:
---
Attachment: YARN-6550.002.patch

Updated the patch with the redirect for Windows, which was missed earlier.

> Capture launch_container.sh logs
> 
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6550.002.patch, YARN-6550.patch
>
>
> launch_container.sh, which is generated by the NM, does a bunch of things 
> (like creating links, etc.) while launching a process. No logs are captured 
> until {{exec}} is called. We need to capture all failures of 
> launch_container.sh for easier troubleshooting.






[jira] [Commented] (YARN-6846) Nodemanager can fail to fully delete application local directories when applications are killed

2017-08-01 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109883#comment-16109883
 ] 

Eric Payne commented on YARN-6846:
--

The patch looks good in general, but there is one thing I'd like to clear up.

If I'm reading the man pages correctly for {{geteuid()}}, {{seteuid()}}, and 
{{readdir()}}, they don't generate {{ENOENT}}. If that is true, then the 
following changes are not necessary:
{noformat}
@@ -1837,7 +1837,7 @@ static int rmdir_as_nm(const char* path) {
@@ -1985,7 +2005,7 @@ static int recursive_unlink_helper(int dirfd, const char 
*name,
{noformat}

Thoughts?

> Nodemanager can fail to fully delete application local directories when 
> applications are killed
> ---
>
> Key: YARN-6846
> URL: https://issues.apache.org/jira/browse/YARN-6846
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.1
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Critical
> Attachments: YARN-6846.001.patch, YARN-6846.002.patch, 
> YARN-6846.003.patch
>
>
> When an application is killed all of the running containers are killed and 
> the app waits for the containers to complete before cleaning up.  As each 
> container completes the container directory is deleted via the 
> DeletionService.  After all containers have completed the app completes and 
> the app directory is deleted.  If the app completes quickly enough then the 
> deletion of the container and app directories can race against each other.  
> If the container deletion executor deletes a file just before the application 
> deletion executor then it can cause the application deletion executor to 
> fail, leaving the remaining entries in the application directory lingering.






[jira] [Commented] (YARN-6550) Capture launch_container.sh logs

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109878#comment-16109878
 ] 

Hadoop QA commented on YARN-6550:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
45s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 28s{color} 
| {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 2 new + 20 unchanged - 2 fixed = 22 total (was 22) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 31 new + 117 unchanged - 1 fixed = 148 total (was 118) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
56s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 4 new + 5 unchanged - 0 fixed = 9 total (was 5) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 4 new + 105 unchanged - 0 fixed = 109 total (was 105) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
12s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
|  |  Unread field: should this field be static?  At ContainerLaunch.java:[line 863] |
|  |  Unread field: should this field be static?  At ContainerLaunch.java:[line 862] |
|  |  Format-string method String.format(String, Object[]) called with format 
string "@%s symlink "%s" "%s"" wants 3 arguments but is given 4 in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch$WindowsShellScriptBuilder.link(Path,
 Path)  At ContainerLaunch.java:with format string "@%s symlink "%s" "%s"" 
wants 3 arguments but is given 4 in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch$WindowsShellScriptBuilder.link(Path,
 Path)  At ContainerLaunch.java:[line 1138] |
|  |  Format-string method String.format(String, Object[]) called with format 
string "@if not exist "%s" mkdir "%s"" wants 2 arguments but is given 3 in 

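For illustration, a minimal sketch (not the actual ContainerLaunch code) of 
the arity mismatch FindBugs is flagging here: the format string has three 
placeholders, so the fourth argument is silently ignored at runtime rather 
than failing, which is exactly why the tool reports it.
{code}
// Illustrative only: format string wants 3 arguments but is given 4.
public class FormatArityDemo {
  public static void main(String[] args) {
    String format = "@%s symlink \"%s\" \"%s\""; // 3 placeholders
    // String.format silently drops the extra argument, so the bug
    // never surfaces at runtime; FindBugs catches it statically.
    System.out.println(String.format(format, "cmd", "target", "link", "extra"));
  }
}
{code}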
[jira] [Commented] (YARN-5219) When an export var command fails in launch_container.sh, the full container launch should fail

2017-08-01 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109875#comment-16109875
 ] 

Suma Shivaprasad commented on YARN-5219:


The updated patch LGTM. +1

> When an export var command fails in launch_container.sh, the full container 
> launch should fail
> --
>
> Key: YARN-5219
> URL: https://issues.apache.org/jira/browse/YARN-5219
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Hitesh Shah
>Assignee: Sunil G
> Attachments: YARN-5219.001.patch, YARN-5219.003.patch, 
> YARN-5219.004.patch, YARN-5219.005.patch, YARN-5219.006.patch, 
> YARN-5219.007.patch, YARN-5219-branch-2.001.patch
>
>
> Today, a container fails if certain files fail to localize. However, if 
> certain env vars fail to get set up properly, either due to bugs in the yarn 
> application or misconfiguration, the actual process launch still gets 
> triggered. This results in either confusing error messages if the process 
> fails to launch or, worse yet, the process launching but then behaving 
> wrongly if the env var is used to control some behavioral aspects. 
> In this scenario, the issue was reproduced by trying to do export 
> abc="$\{foo.bar}", which is invalid as var names cannot contain "." in bash. 
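For illustration, a hedged sketch of the kind of up-front validation that 
would make such a launch fail fast: bash variable names must match POSIX 
naming rules, so a name containing "." can be rejected before the script ever 
runs. This is illustrative only, not the NM's actual code.
{code}
import java.util.regex.Pattern;

// Hypothetical validator: reject env var names bash cannot export.
public class EnvNameCheck {
  private static final Pattern VALID_NAME =
      Pattern.compile("[A-Za-z_][A-Za-z0-9_]*");

  static void checkName(String name) {
    if (!VALID_NAME.matcher(name).matches()) {
      throw new IllegalArgumentException(
          "Invalid bash variable name: " + name);
    }
  }

  public static void main(String[] args) {
    checkName("JAVA_HOME"); // passes
    checkName("foo.bar");   // throws: '.' is not allowed in bash names
  }
}
{code}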



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6593) [API] Introduce Placement Constraint object

2017-08-01 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109872#comment-16109872
 ] 

Jian He edited comment on YARN-6593 at 8/1/17 10:05 PM:


Actually, my comments should apply to YARN-6594 rather than this jira.

For example, I want to associate some meta info like "env=test" and 
"version=v2" with the container, and the AM can define the keys by itself. The 
AM can then select containers based on these keys and values.
To me, the tag is not just used for scheduling decisions. It's also for 
associating meta info with the container; otherwise, the AM has to maintain 
and persist the mapping by itself. If YARN natively supports this, it'll be a 
lot easier for the AM.

bq. I have some first examples in PlacementConstraints. Do you think we should 
add them in a different class?
I was thinking of having a class which lists all sorts of scenarios and their 
implementations; users can pretty much copy the code as needed. Reading 
javadoc is one option but still not straightforward. Alternatively, if the 
javadoc is comprehensive enough to cover all scenarios in one place, rather 
than making users look in different places, that should also be fine. 


was (Author: jianhe):
Actually, my comments should apply to YARN-6594 rather than this jira.

For example, I want to associate some meta info like "env=test" and 
"version=v2" with the container, and the AM can define the keys by itself. The 
AM can then select containers based on these keys and values.
To me, the tag is not just used for scheduling decisions. It's also for 
associating meta info with the container; otherwise, the AM has to maintain 
and persist the mapping by itself. If YARN natively supports this, it'll be a 
lot easier for the AM.

bq. I have some first examples in PlacementConstraints. Do you think we should 
add them in a different class?
I was thinking of having a class which lists all sorts of scenarios and the 
API definitions; users can pretty much copy the semantics. Reading javadoc is 
one option but still not straightforward. 

> [API] Introduce Placement Constraint object
> ---
>
> Key: YARN-6593
> URL: https://issues.apache.org/jira/browse/YARN-6593
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6593.001.patch, YARN-6593.002.patch, 
> YARN-6593.003.patch, YARN-6593.004.patch, YARN-6593.005.patch, 
> YARN-6593.006.patch, YARN-6593.007.patch, YARN-6593.008.patch
>
>
> Just removed Fixed version and moved it to target version as we set fix 
> version only after patch is committed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object

2017-08-01 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109872#comment-16109872
 ] 

Jian He commented on YARN-6593:
---

Actually, my comments should apply to YARN-6594 rather than this jira.

For example, I want to associate some meta info like "env=test" and 
"version=v2" with the container, and the AM can define the keys by itself. The 
AM can then select containers based on these keys and values.
To me, the tag is not just used for scheduling decisions. It's also for 
associating meta info with the container; otherwise, the AM has to maintain 
and persist the mapping by itself. If YARN natively supports this, it'll be a 
lot easier for the AM.

bq. I have some first examples in PlacementConstraints. Do you think we should 
add them in a different class?
I was thinking of having a class which lists all sorts of scenarios and the 
API definitions; users can pretty much copy the semantics. Reading javadoc is 
one option but still not straightforward. 
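
For illustration, a purely hypothetical sketch of the AM-side selection 
described above (the tag map and container ids are invented for the example; 
this is not a YARN API):
{code}
import java.util.*;

// Hypothetical: AM-side bookkeeping of container -> tags, plus selection.
public class TagSelectionSketch {
  public static void main(String[] args) {
    Map<String, Map<String, String>> containerTags = new HashMap<>();
    containerTags.put("container_01", Map.of("env", "test", "version", "v2"));
    containerTags.put("container_02", Map.of("env", "prod", "version", "v1"));

    // Select containers whose tags match env=test.
    List<String> selected = new ArrayList<>();
    for (Map.Entry<String, Map<String, String>> e : containerTags.entrySet()) {
      if ("test".equals(e.getValue().get("env"))) {
        selected.add(e.getKey());
      }
    }
    System.out.println(selected); // [container_01]
  }
}
{code}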

> [API] Introduce Placement Constraint object
> ---
>
> Key: YARN-6593
> URL: https://issues.apache.org/jira/browse/YARN-6593
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6593.001.patch, YARN-6593.002.patch, 
> YARN-6593.003.patch, YARN-6593.004.patch, YARN-6593.005.patch, 
> YARN-6593.006.patch, YARN-6593.007.patch, YARN-6593.008.patch
>
>
> Just removed Fixed version and moved it to target version as we set fix 
> version only after patch is committed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5534) Allow whitelisted volume mounts

2017-08-01 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109862#comment-16109862
 ] 

Miklos Szegedi commented on YARN-5534:
--

Thank you for the patch [~shaneku...@gmail.com]. Quick question: shouldn't 
white-list-volume-mounts be a setting in container-executor.cfg instead of 
yarn-site.xml?

> Allow whitelisted volume mounts 
> 
>
> Key: YARN-5534
> URL: https://issues.apache.org/jira/browse/YARN-5534
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: luhuichun
>Assignee: Shane Kumpf
> Attachments: YARN-5534.001.patch, YARN-5534.002.patch, 
> YARN-5534.003.patch
>
>
> 1. Introduction 
> Mounting files or directories from the host is one way of passing 
> configuration and other information into a docker container. 
> We could allow the user to set a list of mounts in the environment of 
> ContainerLaunchContext (e.g. /dir1:/targetdir1,/dir2:/targetdir2). 
> These would be mounted read-only to the specified target locations. This has 
> been resolved in YARN-4595.
> 2. Problem Definition
> But mounting arbitrary volumes into a Docker container can be a security 
> risk.
> 3. Possible Solutions
> One approach to providing safe mounts is to allow the cluster administrator 
> to configure a set of parent directories as white-listed mounting 
> directories.
> Add a property named yarn.nodemanager.volume-mounts.white-list; when the 
> container executor does mount checking, only the allowed directories or 
> their sub-directories can be mounted. 
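
For illustration, a hedged sketch of the sub-directory check the description 
calls for: a requested mount passes only if it equals or lives under a 
whitelisted parent after path normalization. The property wiring is omitted 
and the class and method names are assumptions, not the container-executor's 
actual code.
{code}
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

// Hypothetical mount-whitelist check for illustration only.
public class MountWhitelistCheck {
  static boolean allowed(List<Path> whitelist, String requested) {
    Path req = Paths.get(requested).normalize(); // collapse ".." segments
    return whitelist.stream().anyMatch(req::startsWith);
  }

  public static void main(String[] args) {
    List<Path> whitelist = Arrays.asList(
        Paths.get("/etc/hadoop"), Paths.get("/var/lib/data"));
    System.out.println(allowed(whitelist, "/var/lib/data/set1"));         // true
    System.out.println(allowed(whitelist, "/var/lib/data/../../etc/pw")); // false
  }
}
{code}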



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6853) Add MySql Scripts for FederationStateStore

2017-08-01 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109850#comment-16109850
 ] 

Giovanni Matteo Fumarola commented on YARN-6853:


Thanks [~curino] for the feedback - fixed them all in V5.

> Add MySql Scripts for FederationStateStore
> --
>
> Key: YARN-6853
> URL: https://issues.apache.org/jira/browse/YARN-6853
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-6853.v1.patch, YARN-6853-YARN-2915.v1.patch, 
> YARN-6853-YARN-2915.v2.patch, YARN-6853-YARN-2915.v3.patch, 
> YARN-6853-YARN-2915.v4.patch, YARN-6853-YARN-2915.v5.patch
>
>
> In YARN-3663 we added the SQL scripts for SQLServer. We want to add the MySQL 
> scripts to be able to run Federation with a MySQL server, which will be less 
> performant but more convenient.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6853) Add MySql Scripts for FederationStateStore

2017-08-01 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-6853:
---
Attachment: YARN-6853-YARN-2915.v5.patch

> Add MySql Scripts for FederationStateStore
> --
>
> Key: YARN-6853
> URL: https://issues.apache.org/jira/browse/YARN-6853
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-6853.v1.patch, YARN-6853-YARN-2915.v1.patch, 
> YARN-6853-YARN-2915.v2.patch, YARN-6853-YARN-2915.v3.patch, 
> YARN-6853-YARN-2915.v4.patch, YARN-6853-YARN-2915.v5.patch
>
>
> In YARN-3663 we added the SQL scripts for SQLServer. We want to add the MySQL 
> scripts to be able to run Federation with a MySQL server, which will be less 
> performant but more convenient.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6788) Improve performance of resource profile branch

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109830#comment-16109830
 ] 

Hadoop QA commented on YARN-6788:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3926 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
18s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
43s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
34s{color} | {color:green} YARN-3926 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
5s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-3926 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
40s{color} | {color:green} YARN-3926 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 52s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 11 new + 194 unchanged - 17 fixed = 205 total (was 211) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
11s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
generated 1 new + 0 unchanged - 1 fixed = 1 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 48s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
57s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 47m 25s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}187m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api |
|  |  Possible doublecheck on 

[jira] [Comment Edited] (YARN-6550) Capture launch_container.sh logs

2017-08-01 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108187#comment-16108187
 ] 

Suma Shivaprasad edited comment on YARN-6550 at 8/1/17 9:31 PM:


Attached a patch which redirects the pre-launch step commands' (link, mkdir, 
export, etc.) stdout and stderr in launch_container.sh to prelaunch.out and 
prelaunch.err respectively. These logs are available in the container log 
directory along with the application-specific stderr and stdout. 




was (Author: suma.shivaprasad):
Attached a patch which redirects the pre-launch stdout and stderr in 
launch_container.sh to prelaunch.out and prelaunch.err respectively. These 
logs are available in the container log directory along with the 
application-specific stderr and stdout. 



> Capture launch_container.sh logs
> 
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6550.patch
>
>
> launch_container.sh, which is generated by the NM, does a bunch of things 
> (like creating links, etc.) while launching a process. No logs are captured 
> until {{exec}} is called. We need to capture all failures of 
> launch_container.sh for easier troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6550) Capture launch_container.sh logs

2017-08-01 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108187#comment-16108187
 ] 

Suma Shivaprasad edited comment on YARN-6550 at 8/1/17 9:30 PM:


Attached a patch which redirects the pre-launch stdout and stderr in 
launch_container.sh to prelaunch.out and prelaunch.err respectively. These 
logs are available in the container log directory along with the 
application-specific stderr and stdout. 




was (Author: suma.shivaprasad):
Attached a patch which redirects the pre-launch stdout and stderr in 
launch_container.sh to prelaunch.out and prelaunch.err. These logs are 
available in the container log directory along with the application-specific 
stderr and stdout. 



> Capture launch_container.sh logs
> 
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6550.patch
>
>
> launch_container.sh, which is generated by the NM, does a bunch of things 
> (like creating links, etc.) while launching a process. No logs are captured 
> until {{exec}} is called. We need to capture all failures of 
> launch_container.sh for easier troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6895) [FairScheduler] Preemption reservation may cause regular reservation leaks

2017-08-01 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-6895:
-
Attachment: YARN-6895.001.patch

> [FairScheduler] Preemption reservation may cause regular reservation leaks
> --
>
> Key: YARN-6895
> URL: https://issues.apache.org/jira/browse/YARN-6895
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Blocker
> Attachments: YARN-6895.000.patch, YARN-6895.001.patch
>
>
> We found a limitation in the implementation of YARN-6432. If the container 
> released is smaller than the preemption request, a node reservation is 
> created that is never deleted.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4161) Capacity Scheduler : Assign single or multiple containers per heart beat driven by configuration

2017-08-01 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109725#comment-16109725
 ] 

Wei Yan commented on YARN-4161:
---

[~sunilg] could you help review the latest patch? :) Thanks.

> Capacity Scheduler : Assign single or multiple containers per heart beat 
> driven by configuration
> 
>
> Key: YARN-4161
> URL: https://issues.apache.org/jira/browse/YARN-4161
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler
>Reporter: Mayank Bansal
>Assignee: Mayank Bansal
>  Labels: oct16-medium
> Attachments: YARN-4161.002.patch, YARN-4161.003.patch, 
> YARN-4161.004.patch, YARN-4161.005.patch, YARN-4161.006.patch, 
> YARN-4161.patch, YARN-4161.patch.1
>
>
> Capacity Scheduler right now schedules multiple containers per heart beat if 
> there are more resources available in the node.
> This approach works fine; however, in some cases it does not distribute the 
> load across the cluster, so the throughput of the cluster suffers. I am 
> adding a feature to drive that via configuration, so that we can control the 
> number of containers assigned per heart beat.
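
For illustration, a hedged sketch of the configuration-driven cap described 
above; the method names and the cap value are invented for the example, not 
the patch's actual code.
{code}
import java.util.List;

// Hypothetical: stop assigning once the configured per-heartbeat cap is hit.
public class PerHeartbeatAssignLimit {
  static int assignOnHeartbeat(List<String> pendingRequests, int maxAssignments) {
    int assigned = 0;
    for (String request : pendingRequests) {
      if (assigned >= maxAssignments) {
        break; // configured cap reached; wait for the next heartbeat
      }
      // ... allocate a container for 'request' here ...
      assigned++;
    }
    return assigned;
  }

  public static void main(String[] args) {
    System.out.println(assignOnHeartbeat(List.of("r1", "r2", "r3"), 2)); // 2
  }
}
{code}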



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6895) [FairScheduler] Preemption reservation may cause regular reservation leaks

2017-08-01 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109701#comment-16109701
 ] 

Miklos Szegedi commented on YARN-6895:
--

Thank you, [~yufeigu].
We will do a normal reservation if there are no active preemptions on the node 
for the app. Does this answer your question? There are still reservations on 
other nodes if we preempt on one node, but that should not be the cause of 
this regression, since that logic has been around since before YARN-6432.
{code}
// The desired container won't fit here, so reserve
// Reserve only, if not reserved for preempted resources, otherwise
// we may end up with duplicate reservations
if (isReservable(capability) &&
    !node.isPreemptedForApp(this) &&
    reserve(pendingAsk.getPerAllocationResource(), node, reservedContainer,
        type, schedulerKey)) {
{code}
I had a patch with a single-class implementation, but it was rejected by the 
reviewers. I think we can revisit that, but I would not add too many changes 
to this Jira, for simplicity.


> [FairScheduler] Preemption reservation may cause regular reservation leaks
> --
>
> Key: YARN-6895
> URL: https://issues.apache.org/jira/browse/YARN-6895
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Blocker
> Attachments: YARN-6895.000.patch
>
>
> We found a limitation in the implementation of YARN-6432. If the container 
> released is smaller than the preemption request, a node reservation is 
> created that is never deleted.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6757) Refactor the usage of yarn.nodemanager.linux-container-executor.cgroups.mount-path

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109640#comment-16109640
 ] 

Hadoop QA commented on YARN-6757:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m  
2s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
43s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
44s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
59s{color} | {color:green} root: The patch generated 0 new + 8 unchanged - 15 
fixed = 8 total (was 23) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
25s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 25s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
39s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
22s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
15s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
8s{color} | 

[jira] [Commented] (YARN-6901) A CapacityScheduler app->LeafQueue deadlock found in branch-2.8

2017-08-01 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109637#comment-16109637
 ] 

Jason Lowe commented on YARN-6901:
--

bq. I checked the code of the problematic cluster, it is a little bit different 
from branch-2.8.

OK, that explains why I couldn't line up the stacktrace with branch-2.8 code or 
recent ancestors to it and also why we've never seen this in practice.

I agree it would be nice if getQueuePath were lockless, although long-term I 
think something like YARN-6917 would be preferable to a volatile approach.  I'm 
OK with volatile in the short-term.

Why does the patch change the synchronization around the ordering policy?  That 
does not seem to have anything to do with reaching up the hierarchy.  It also 
looks like it introduces a bug if two threads try to call setOrderingPolicy at 
the same time, e.g.:
# Thread 1 notices the old ordering policy, policy A, is not null and begins to 
copy the old contents into its new policy, policy B
# Thread 2 notices the old ordering policy, policy A, is not null and begins to 
copy the old contents into its new policy, policy C
# Thread 1 sets the policy to policy B
# Thread 2 sets the policy to policy C

Now we are left with a policy that contains the entities from policy A and C 
and have lost the original entities from B, whereas the old code would result 
in a policy containing the entities of policy A, B, and C regardless of which 
thread won the race for the lock.  I think we can get rid of the lock on the 
getter, but I think it is necessary on the setter or we need to do CAS-like 
logic and loop back around if someone has swapped out the policy while we were 
copying the old one.
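
For illustration, a minimal sketch of the CAS-like setter suggested above, 
assuming a simplified stand-in policy type (this is not the CapacityScheduler 
code): the merged policy is rebuilt from whatever policy is currently 
installed, and the swap is retried if another thread replaced it mid-copy, so 
no thread's entities are lost.
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical stand-in for the scheduler's ordering policy.
class Policy {
  final List<String> entities = new ArrayList<>();
}

class PolicyHolder {
  private final AtomicReference<Policy> ref = new AtomicReference<>(new Policy());

  // CAS-like setter: copy the current policy's entities into a fresh
  // policy and retry the swap if someone replaced it while we copied.
  void setPolicy(List<String> newEntities) {
    Policy old, merged;
    do {
      old = ref.get();
      merged = new Policy();
      merged.entities.addAll(old.entities); // keep entities already present
      merged.entities.addAll(newEntities);  // plus the ones being added
    } while (!ref.compareAndSet(old, merged));
  }

  Policy getPolicy() { // lock-free getter, as suggested above
    return ref.get();
  }
}
{code}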

> A CapacityScheduler app->LeafQueue deadlock found in branch-2.8 
> 
>
> Key: YARN-6901
> URL: https://issues.apache.org/jira/browse/YARN-6901
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-6901.branch-2.8.001.patch
>
>
> Stacktrace:
> {code}
> Thread 22068: (state = BLOCKED)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueue.getParent()
>  @bci=0, line=185 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.getQueuePath()
>  @bci=8, line=262 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.AbstractContainerAllocator.getCSAssignmentFromAllocateResult(org.apache.hadoop.yarn.api.records.Resource,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.ContainerAllocation,
>  org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) 
> @bci=183, line=80 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.assignContainers(org.apache.hadoop.yarn.api.records.Resource,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode,
>  org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) 
> @bci=204, line=747 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.ContainerAllocator.assignContainers(org.apache.hadoop.yarn.api.records.Resource,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode,
>  org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) 
> @bci=16, line=49 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.assignContainers(org.apache.hadoop.yarn.api.records.Resource,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode,
>  org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode,
>  org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) 
> @bci=61, line=468 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(org.apache.hadoop.yarn.api.records.Resource,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode,
>  org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode)
>  @bci=148, 

[jira] [Commented] (YARN-6550) Capture launch_container.sh logs

2017-08-01 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109638#comment-16109638
 ] 

Suma Shivaprasad commented on YARN-6550:


Example launch_container.sh with the patch

{noformat}
#!/bin/bash

export 
STDOUT="/2017-07-25/hadoop-3.0.0-beta1-SNAPSHOT/logs/userlogs/application_1501616662779_0002/container_1501616662779_0002_01_01/prelaunch.out"
export 
STDERR="/2017-07-25/hadoop-3.0.0-beta1-SNAPSHOT/logs/userlogs/application_1501616662779_0002/container_1501616662779_0002_01_01/prelaunch.err"
echo "Setting up env variables" 1> >(tee -a $STDOUT >&1)
export 
HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/2017-07-25/hadoop-3.0.0-beta1-SNAPSHOT/etc/hadoop"}
 2> >(tee -a $STDERR >&2)
export YARN_CONTAINER_RUNTIME_TYPE="docker" 2> >(tee -a $STDERR >&2)
export 
JAVA_HOME=${JAVA_HOME:-"/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home"}
 2> >(tee -a $STDERR >&2)
export YARN_CONTAINER_RUNTIME_DOCKER_IMAGE="sequenceiq/hadoop-docker" 2> >(tee 
-a $STDERR >&2)
export APP_SUBMIT_TIME_ENV="1501617165715" 2> >(tee -a $STDERR >&2)
export NM_HOST="10.22.16.92" 2> >(tee -a $STDERR >&2)
export LD_LIBRARY_PATH="$PWD:$HADOOP_COMMON_HOME/lib/native" 2> >(tee -a 
$STDERR >&2)
export 
HADOOP_HDFS_HOME=${HADOOP_HDFS_HOME:-"/2017-07-25/hadoop-3.0.0-beta1-SNAPSHOT"} 
2> >(tee -a $STDERR >&2)
export LOGNAME="sshivaprasad" 2> >(tee -a $STDERR >&2)
export JVM_PID="$$" 2> >(tee -a $STDERR >&2)
export 
HADOOP_MAPRED_HOME=${HADOOP_MAPRED_HOME:-"/2017-07-25/hadoop-3.0.0-beta1-SNAPSHOT"}
 2> >(tee -a $STDERR >&2)
export 
PWD="/tmp/hadoop-sshivaprasad/nm-local-dir/usercache/sshivaprasad/appcache/application_1501616662779_0002/container_1501616662779_0002_01_01"
 2> >(tee -a $STDERR >&2)
export 
HADOOP_COMMON_HOME=${HADOOP_COMMON_HOME:-"/2017-07-25/hadoop-3.0.0-beta1-SNAPSHOT"}
 2> >(tee -a $STDERR >&2)
export 
LOCAL_DIRS="/tmp/hadoop-sshivaprasad/nm-local-dir/usercache/sshivaprasad/appcache/application_1501616662779_0002"
 2> >(tee -a $STDERR >&2)
export APPLICATION_WEB_PROXY_BASE="/proxy/application_1501616662779_0002" 2> 
>(tee -a $STDERR >&2)
export SHELL="/bin/bash" 2> >(tee -a $STDERR >&2)
export NM_HTTP_PORT="8042" 2> >(tee -a $STDERR >&2)
export 
LOG_DIRS="/2017-07-25/hadoop-3.0.0-beta1-SNAPSHOT/logs/userlogs/application_1501616662779_0002/container_1501616662779_0002_01_01"
 2> >(tee -a $STDERR >&2)
export 
NM_AUX_SERVICE_mapreduce_shuffle="AAA0+gA=
" 2> >(tee -a $STDERR >&2)
export NM_PORT="9" 2> >(tee -a $STDERR >&2)
export USER="sshivaprasad" 2> >(tee -a $STDERR >&2)
export 
HADOOP_YARN_HOME=${HADOOP_YARN_HOME:-"/2017-07-25/hadoop-3.0.0-beta1-SNAPSHOT"} 
2> >(tee -a $STDERR >&2)
export 
CLASSPATH="$PWD:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME/share/hadoop/common/*:$HADOOP_COMMON_HOME/share/hadoop/common/lib/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*:$HADOOP_YARN_HOME/share/hadoop/yarn/*:$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*:job.jar/job.jar:job.jar/classes/:job.jar/lib/*:$PWD/*"
 2> >(tee -a $STDERR >&2)
export 
HADOOP_TOKEN_FILE_LOCATION="/tmp/hadoop-sshivaprasad/nm-local-dir/usercache/sshivaprasad/appcache/application_1501616662779_0002/container_1501616662779_0002_01_01/container_tokens"
 2> >(tee -a $STDERR >&2)
export 
LOCAL_USER_DIRS="/tmp/hadoop-sshivaprasad/nm-local-dir/usercache/sshivaprasad/" 
2> >(tee -a $STDERR >&2)
export HOME="/home/" 2> >(tee -a $STDERR >&2)
export CONTAINER_ID="container_1501616662779_0002_01_01" 2> >(tee -a 
$STDERR >&2)
export MALLOC_ARENA_MAX="" 2> >(tee -a $STDERR >&2)
echo "Setting up job resources" 1> >(tee -a $STDOUT >&1)
ln -sf 
"/tmp/hadoop-sshivaprasad/nm-local-dir/usercache/sshivaprasad/appcache/application_1501616662779_0002/filecache/11/job.jar"
 "job.jar" 2> >(tee -a $STDERR >&2)
hadoop_shell_errorcode=$?
if [ $hadoop_shell_errorcode -ne 0 ]
then
  exit $hadoop_shell_errorcode
fi
mkdir -p jobSubmitDir 2> >(tee -a $STDERR >&2)
hadoop_shell_errorcode=$?
if [ $hadoop_shell_errorcode -ne 0 ]
then
  exit $hadoop_shell_errorcode
fi
ln -sf 
"/tmp/hadoop-sshivaprasad/nm-local-dir/usercache/sshivaprasad/appcache/application_1501616662779_0002/filecache/12/job.split"
 "jobSubmitDir/job.split" 2> >(tee -a $STDERR >&2)
hadoop_shell_errorcode=$?
if [ $hadoop_shell_errorcode -ne 0 ]
then
  exit $hadoop_shell_errorcode
fi
ln -sf 
"/tmp/hadoop-sshivaprasad/nm-local-dir/usercache/sshivaprasad/appcache/application_1501616662779_0002/filecache/13/job.xml"
 "job.xml" 2> >(tee -a $STDERR >&2)
hadoop_shell_errorcode=$?
if [ $hadoop_shell_errorcode -ne 0 ]
then
  exit $hadoop_shell_errorcode
fi
mkdir -p jobSubmitDir 2> >(tee -a $STDERR >&2)
hadoop_shell_errorcode=$?
if [ $hadoop_shell_errorcode -ne 0 ]
then
  exit $hadoop_shell_errorcode
fi
ln -sf 

[jira] [Commented] (YARN-6903) Yarn-native-service framework core rewrite

2017-08-01 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109631#comment-16109631
 ] 

Jian He commented on YARN-6903:
---

yeah, makes sense. I'll do that separately. 

> Yarn-native-service framework core rewrite
> --
>
> Key: YARN-6903
> URL: https://issues.apache.org/jira/browse/YARN-6903
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6903.yarn-native-services.01.patch, 
> YARN-6903.yarn-native-services.02.patch
>
>
> There are some new features, like rich placement scheduling, container auto 
> restart, and container upgrade, in YARN core that can be taken advantage of 
> by the native-service framework. Besides, there is quite a lot of legacy 
> code which is no longer required. 
> So we decided to rewrite the core part to have a leaner codebase and make 
> use of various advanced features in YARN. 
> The new code design will be in line with what we have designed for the 
> service API, YARN-4793.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6903) Yarn-native-service framework core rewrite

2017-08-01 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109624#comment-16109624
 ] 

Vinod Kumar Vavilapalli commented on YARN-6903:
---

I think we should also split the API, client and AM modules - maybe in a 
different JIRA.

> Yarn-native-service framework core rewrite
> --
>
> Key: YARN-6903
> URL: https://issues.apache.org/jira/browse/YARN-6903
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6903.yarn-native-services.01.patch, 
> YARN-6903.yarn-native-services.02.patch
>
>
> There are some new features, like rich placement scheduling, container auto 
> restart, and container upgrade, in YARN core that can be taken advantage of 
> by the native-service framework. Besides, there is quite a lot of legacy 
> code which is no longer required. 
> So we decided to rewrite the core part to have a leaner codebase and make 
> use of various advanced features in YARN. 
> The new code design will be in line with what we have designed for the 
> service API, YARN-4793.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


