[jira] [Commented] (YARN-6042) Dump scheduler and queue state information into FairScheduler DEBUG log

2017-03-02 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893880#comment-15893880
 ] 

Tao Jie commented on YARN-6042:
---

Hi [~yufeigu], dumping scheduler/queue state is very useful for detecting 
scheduling problems at run time. It seems to me that you are trying to write 
scheduler/queue information to a log file. How about printing this information 
on the web UI, just as we can get server stacks via a link?

> Dump scheduler and queue state information into FairScheduler DEBUG log
> ---
>
> Key: YARN-6042
> URL: https://issues.apache.org/jira/browse/YARN-6042
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6042.001.patch, YARN-6042.002.patch, 
> YARN-6042.003.patch, YARN-6042.004.patch, YARN-6042.005.patch, 
> YARN-6042.006.patch, YARN-6042.007.patch, YARN-6042.008.patch
>
>
> To improve the debugging of scheduler issues, it would be a big improvement 
> to be able to dump the scheduler state into a log on request. 
> Dumping the scheduler state at a point in time would allow debugging of a 
> scheduler that is not hung (deadlocked) but is also not assigning containers. 
> Currently we do not have a proper overview of what state the scheduler and 
> the queues are in, and we have to make assumptions or guess.
> The scheduler and queue state needed would include (not exhaustive):
> - instantaneous and steady fair share (app / queue)
> - AM share and resources
> - weight
> - app demand
> - application run state (runnable/non runnable)
> - last time at fair/min share
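
A minimal sketch of such a dump, assuming accessors like getFairShare(),
getSteadyFairShare(), getDemand(), and getWeights() on the queue objects (an
illustration only, not the attached patches):

{code}
// Walk the FairScheduler queue hierarchy and log one DEBUG line per queue.
private void dumpQueueState(FSQueue queue) {
  LOG.debug("Queue " + queue.getName()
      + ": fairShare=" + queue.getFairShare()
      + ", steadyFairShare=" + queue.getSteadyFairShare()
      + ", demand=" + queue.getDemand()
      + ", weight=" + queue.getWeights());
  if (queue instanceof FSParentQueue) {
    for (FSQueue child : ((FSParentQueue) queue).getChildQueues()) {
      dumpQueueState(child); // recurse into child queues
    }
  }
}
{code}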






[jira] [Assigned] (YARN-6274) One error in the documentation of hadoop 2.7.3

2017-03-02 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reassigned YARN-6274:
-

Assignee: Weiwei Yang

> One error in the documentation of hadoop 2.7.3
> --
>
> Key: YARN-6274
> URL: https://issues.apache.org/jira/browse/YARN-6274
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 2.7.3
>Reporter: Charles Zhang
>Assignee: Weiwei Yang
>Priority: Trivial
>  Labels: beginner, easyfix
> Fix For: 2.7.3
>
>
> I think one parameter in the "Monitoring Health of NodeManagers" section of 
> "Cluster Setup" is wrong. The parameter 
> "yarn.nodemanager.health-checker.script.interval-ms" should be 
> "yarn.nodemanager.health-checker.interval-ms". See 
> http://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-common/ClusterSetup.html.






[jira] [Updated] (YARN-6275) Fail to show real-time tracking charts in SLS

2017-03-02 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6275:
---
Description: 
Stack trace:
{code}
java.lang.NullPointerException
at 
org.eclipse.jetty.server.handler.ResourceHandler.handle(ResourceHandler.java:499)
at org.apache.hadoop.yarn.sls.web.SLSWebApp$1.handle(SLSWebApp.java:152)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:524)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:745)
{code}

  was:
{code}
java.lang.NullPointerException
at 
org.eclipse.jetty.server.handler.ResourceHandler.handle(ResourceHandler.java:499)
at org.apache.hadoop.yarn.sls.web.SLSWebApp$1.handle(SLSWebApp.java:152)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:524)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:745)
{code}


> Fail to show real-time tracking charts in SLS
> -
>
> Key: YARN-6275
> URL: https://issues.apache.org/jira/browse/YARN-6275
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>
> Stack trace:
> {code}
> java.lang.NullPointerException
>   at 
> org.eclipse.jetty.server.handler.ResourceHandler.handle(ResourceHandler.java:499)
>   at org.apache.hadoop.yarn.sls.web.SLSWebApp$1.handle(SLSWebApp.java:152)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:524)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:745)
> {code}





[jira] [Created] (YARN-6275) Fail to show real-time tracking charts in SLS

2017-03-02 Thread Yufei Gu (JIRA)
Yufei Gu created YARN-6275:
--

 Summary: Fail to show real-time tracking charts in SLS
 Key: YARN-6275
 URL: https://issues.apache.org/jira/browse/YARN-6275
 Project: Hadoop YARN
  Issue Type: Bug
  Components: scheduler-load-simulator
Affects Versions: 3.0.0-alpha2
Reporter: Yufei Gu
Assignee: Yufei Gu


{code}
java.lang.NullPointerException
at 
org.eclipse.jetty.server.handler.ResourceHandler.handle(ResourceHandler.java:499)
at org.apache.hadoop.yarn.sls.web.SLSWebApp$1.handle(SLSWebApp.java:152)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:524)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:745)
{code}
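
From the trace, the NPE surfaces inside Jetty's ResourceHandler when the
anonymous handler in SLSWebApp (SLSWebApp$1) delegates to it. A minimal,
hypothetical sketch of that wiring (the port, resource base, and class name
are assumptions, not the SLS source; it only illustrates the call path seen
in the stack trace):

{code}
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.AbstractHandler;
import org.eclipse.jetty.server.handler.ResourceHandler;

public class SlsWebAppWiringSketch {
  public static void main(String[] args) throws Exception {
    Server server = new Server(8080);
    final ResourceHandler staticFiles = new ResourceHandler();
    // If the resource base is missing or the handler is not properly set up,
    // the delegated handle() call below can fail inside ResourceHandler.
    staticFiles.setResourceBase("html");
    server.setHandler(new AbstractHandler() { // plays the role of SLSWebApp$1
      @Override
      public void handle(String target, Request baseRequest,
          HttpServletRequest request, HttpServletResponse response)
          throws IOException, ServletException {
        staticFiles.handle(target, baseRequest, request, response);
      }
    });
    server.start();
    server.join();
  }
}
{code}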






[jira] [Created] (YARN-6274) One error in the documentation of hadoop 2.7.3

2017-03-02 Thread Charles Zhang (JIRA)
Charles Zhang created YARN-6274:
---

 Summary: One error in the documentation of hadoop 2.7.3
 Key: YARN-6274
 URL: https://issues.apache.org/jira/browse/YARN-6274
 Project: Hadoop YARN
  Issue Type: Task
  Components: documentation
Affects Versions: 2.7.3
Reporter: Charles Zhang
Priority: Trivial
 Fix For: 2.7.3


I think one parameter in the "Monitoring Health of NodeManagers" section of 
"Cluster Setup" is wrong. The parameter 
"yarn.nodemanager.health-checker.script.interval-ms" should be 
"yarn.nodemanager.health-checker.interval-ms". See 
http://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-common/ClusterSetup.html.
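
For reference, a yarn-site.xml snippet with the correct property name (the
interval value shown is a common default and the script path is hypothetical):

{code}
<property>
  <name>yarn.nodemanager.health-checker.interval-ms</name>
  <value>600000</value> <!-- how often the NM runs the health-check script -->
</property>
<property>
  <name>yarn.nodemanager.health-checker.script.path</name>
  <value>/etc/hadoop/health-check.sh</value>
</property>
{code}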






[jira] [Commented] (YARN-6268) Container with extra data

2017-03-02 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893850#comment-15893850
 ] 

Rohith Sharma K S commented on YARN-6268:
-

IIUC your use case, this kind of per-type container management should be done 
in the ApplicationMaster. The AM can decide which process each allocated 
container is for by tracking its resource requests. You could look at the 
MRAppMaster code, which handles Mapper and Reducer containers in a similar 
fashion.
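
A hedged sketch of that approach: tag each container type with a distinct
Priority, the way MRAppMaster separates Mapper and Reducer containers. The
class, priorities, and resource sizes below are illustrative, not MRAppMaster
code:

{code}
import java.util.List;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

public class TypedContainerAllocator {
  // Priorities serve as the "type" tag; the values just need to differ.
  private static final Priority TYPE_A = Priority.newInstance(10);
  private static final Priority TYPE_B = Priority.newInstance(20);

  /** Ask for two type-A containers and one type-B container. */
  static void request(AMRMClient<ContainerRequest> am) {
    Resource small = Resource.newInstance(1024, 1);
    Resource big = Resource.newInstance(2048, 2);
    am.addContainerRequest(new ContainerRequest(small, null, null, TYPE_A));
    am.addContainerRequest(new ContainerRequest(small, null, null, TYPE_A));
    am.addContainerRequest(new ContainerRequest(big, null, null, TYPE_B));
  }

  /** Route each allocated container to the right process type. */
  static void dispatch(AllocateResponse response) {
    List<Container> allocated = response.getAllocatedContainers();
    for (Container c : allocated) {
      if (TYPE_A.equals(c.getPriority())) {
        // launch process A in container c
      } else {
        // launch process B in container c
      }
    }
  }
}
{code}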

> Container with extra data
> -
>
> Key: YARN-6268
> URL: https://issues.apache.org/jira/browse/YARN-6268
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api
>Affects Versions: 3.0.0-alpha2
>Reporter: 冯健
>
> Implement a container that can carry extra data (e.g., some user-defined 
> data), so the user can perform operations with it.






[jira] [Comment Edited] (YARN-5153) [YARN-3368] Add a toggle to switch timeline view / table view for containers information inside application-attempt page

2017-03-02 Thread Akhil PB (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893785#comment-15893785
 ] 

Akhil PB edited comment on YARN-5153 at 3/3/17 6:47 AM:


Hi [~leftnoteasy], assigning the ticket to myself since I have a patch done 
along with [YARN-5705|https://issues.apache.org/jira/browse/YARN-5705]. I will 
separate the timeline/table view patch from YARN-5705 and fix it in this 
ticket, so that YARN-5705 can be independent.

cc [~sunilg]


was (Author: akhilpb):
Hi [~wangda], assigning the ticket to myself since I have a patch done along 
with [YARN-5705|https://issues.apache.org/jira/browse/YARN-5705]. I  will 
separate the timeline/table view patch from YARN-5705 and will fix in this 
ticket so that YARN-5705 can be an independent one.

cc [~sunilg]

> [YARN-3368] Add a toggle to switch timeline view / table view for containers 
> information inside application-attempt page
> 
>
> Key: YARN-5153
> URL: https://issues.apache.org/jira/browse/YARN-5153
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Reporter: Wangda Tan
>Assignee: Akhil PB
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> screenshot-4.png, screenshot-5.png, YARN-5153.preliminary.1.patch, 
> YARN-5153-YARN-3368.1.patch
>
>
> Now we only support a timeline view for containers on the app-attempt page; 
> it would also be very useful to show a table of containers in some cases. 
> For example, the user can sort containers based on priority, etc.






[jira] [Updated] (YARN-6153) keepContainer does not work when AM retry window is set

2017-03-02 Thread kyungwan nam (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kyungwan nam updated YARN-6153:
---
Attachment: (was: YARN-6153.006-1.patch)

> keepContainer does not work when AM retry window is set
> ---
>
> Key: YARN-6153
> URL: https://issues.apache.org/jira/browse/YARN-6153
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.1
>Reporter: kyungwan nam
>Assignee: kyungwan nam
> Fix For: 2.8.0, 3.0.0-alpha3
>
> Attachments: YARN-6153.001.patch, YARN-6153.002.patch, 
> YARN-6153.003.patch, YARN-6153.004.patch, YARN-6153.005.patch, 
> YARN-6153.006.patch, YARN-6153-branch-2.8.patch
>
>
> yarn.resourcemanager.am.max-attempts has been configured to 2 in my cluster.
> I submitted a YARN application (a Slider app) with keepContainers=true and 
> attemptFailuresValidityInterval=30.
> It worked properly when the AM failed the first time:
> all containers launched by the previous AM were resynced with the new AM 
> (attempt2) without being killed.
> After 10 minutes, I expected the AM failure count to have been reset by 
> attemptFailuresValidityInterval (5 minutes).
> But all containers were killed when the AM failed the second time (although 
> the new AM, attempt3, was launched properly).
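
For context, the two submission-context settings involved here, as a short
sketch (values illustrative):

{code}
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;

public final class KeepContainersExample {
  /** Keep containers across AM restarts, and let old AM failures age out
   *  of the max-attempts count after the validity interval. */
  public static void configure(ApplicationSubmissionContext ctx) {
    ctx.setKeepContainersAcrossApplicationAttempts(true);
    ctx.setAttemptFailuresValidityInterval(5 * 60 * 1000L); // 5-minute window
  }
}
{code}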






[jira] [Updated] (YARN-6153) keepContainer does not work when AM retry window is set

2017-03-02 Thread kyungwan nam (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kyungwan nam updated YARN-6153:
---
Attachment: (was: YARN-6153-branch-2.8.patch)

> keepContainer does not work when AM retry window is set
> ---
>
> Key: YARN-6153
> URL: https://issues.apache.org/jira/browse/YARN-6153
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.1
>Reporter: kyungwan nam
>Assignee: kyungwan nam
> Fix For: 2.8.0, 3.0.0-alpha3
>
> Attachments: YARN-6153.001.patch, YARN-6153.002.patch, 
> YARN-6153.003.patch, YARN-6153.004.patch, YARN-6153.005.patch, 
> YARN-6153.006-1.patch, YARN-6153.006.patch, YARN-6153-branch-2.8.patch
>
>
> yarn.resourcemanager.am.max-attempts has been configured to 2 in my cluster.
> I submitted a YARN application (a Slider app) with keepContainers=true and 
> attemptFailuresValidityInterval=30.
> It worked properly when the AM failed the first time:
> all containers launched by the previous AM were resynced with the new AM 
> (attempt2) without being killed.
> After 10 minutes, I expected the AM failure count to have been reset by 
> attemptFailuresValidityInterval (5 minutes).
> But all containers were killed when the AM failed the second time (although 
> the new AM, attempt3, was launched properly).






[jira] [Updated] (YARN-6153) keepContainer does not work when AM retry window is set

2017-03-02 Thread kyungwan nam (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kyungwan nam updated YARN-6153:
---
Attachment: YARN-6153-branch-2.8.patch

Thanks for your comment... 
I'm uploading a new patch for branch-2.8. 
The system clock in RMAppImpl will be used for checking the validity interval.


> keepContainer does not work when AM retry window is set
> ---
>
> Key: YARN-6153
> URL: https://issues.apache.org/jira/browse/YARN-6153
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.1
>Reporter: kyungwan nam
>Assignee: kyungwan nam
> Fix For: 2.8.0, 3.0.0-alpha3
>
> Attachments: YARN-6153.001.patch, YARN-6153.002.patch, 
> YARN-6153.003.patch, YARN-6153.004.patch, YARN-6153.005.patch, 
> YARN-6153.006-1.patch, YARN-6153.006.patch, YARN-6153-branch-2.8.patch
>
>
> yarn.resourcemanager.am.max-attempts has been configured to 2 in my cluster.
> I submitted a YARN application (a Slider app) with keepContainers=true and 
> attemptFailuresValidityInterval=30.
> It worked properly when the AM failed the first time:
> all containers launched by the previous AM were resynced with the new AM 
> (attempt2) without being killed.
> After 10 minutes, I expected the AM failure count to have been reset by 
> attemptFailuresValidityInterval (5 minutes).
> But all containers were killed when the AM failed the second time (although 
> the new AM, attempt3, was launched properly).






[jira] [Commented] (YARN-5153) [YARN-3368] Add a toggle to switch timeline view / table view for containers information inside application-attempt page

2017-03-02 Thread Akhil PB (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893785#comment-15893785
 ] 

Akhil PB commented on YARN-5153:


Hi [~wangda], assigning the ticket to myself since I have a patch done along 
with [YARN-5705|https://issues.apache.org/jira/browse/YARN-5705]. I will 
separate the timeline/table view patch from YARN-5705 and fix it in this 
ticket, so that YARN-5705 can be independent.

cc [~sunilg]

> [YARN-3368] Add a toggle to switch timeline view / table view for containers 
> information inside application-attempt page
> 
>
> Key: YARN-5153
> URL: https://issues.apache.org/jira/browse/YARN-5153
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Reporter: Wangda Tan
>Assignee: Akhil PB
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> screenshot-4.png, screenshot-5.png, YARN-5153.preliminary.1.patch, 
> YARN-5153-YARN-3368.1.patch
>
>
> Now we only support a timeline view for containers on the app-attempt page; 
> it would also be very useful to show a table of containers in some cases. 
> For example, the user can sort containers based on priority, etc.






[jira] [Commented] (YARN-6256) Add FROM_ID info key for timeline entities in reader response.

2017-03-02 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893779#comment-15893779
 ] 

Rohith Sharma K S commented on YARN-6256:
-

bq. I do have another question (which also applies to YARN-6027). It seems that 
we're embedding the FROMID value for every single entity we return. Would it be 
possible to do this only for the "last" entity that gets returned if we're 
returning multiple entities? This might be a secondary optimization, but for 
all other entities but the last, the FROMID value would be ignored. In that 
sense, it would simply add to the payload size without providing benefits. 
Thoughts?
Actually, this is one of the points I thought about during YARN-6027. There 
are a couple of considerations:
# Consistency: when we retrieve multiple entities, the default template of 
every entity should contain FROM_ID; otherwise it confuses the user and, at 
least on first observation, looks like a bug.
# Another advantage is that the UI has the option to pick up FROM_ID from any 
entity.
# Maybe we can skip it for the single-entity reader; I do not see any valid 
reason to add it for single-entity retrieval.
# Regarding the payload issue, I do not think adding the FROM_ID field has a 
big impact on payload size, since the field is very small. However, I have 
not come across a payload-size issue before; would you mind elaborating on 
what would go wrong in a production cluster? IIUC, the amount of data 
retrieved increases by some factor, but I believe that factor is very small.
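
A minimal sketch of the alternative quoted above, attaching FROM_ID only to
the last entity of a page; the "FROM_ID" key and the NavigableSet page are
illustrative assumptions, not the reader's actual internals:

{code}
import java.util.NavigableSet;
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;

final class FromIdTagging {
  /** Tag only the last entity so clients can page onward from it. */
  static void tagLastEntityOnly(NavigableSet<TimelineEntity> page,
      String fromId) {
    if (!page.isEmpty()) {
      page.last().addInfo("FROM_ID", fromId);
    }
  }
}
{code}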

> Add FROM_ID info key for timeline entities in reader response. 
> ---
>
> Key: YARN-6256
> URL: https://issues.apache.org/jira/browse/YARN-6256
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6256-YARN-5355.0001.patch
>
>
> It is continuation with YARN-6027 to add FROM_ID key in all other timeline 
> entity responses which includes
> # Flow run entity response. 
> # Application entity response
> # Generic timeline entity response - Here we need to retrospect on idprefix 
> filter which is now separately provided. 






[jira] [Commented] (YARN-6247) Share a single instance of SubClusterResolver instead of instantiating one per AM

2017-03-02 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893774#comment-15893774
 ] 

Botong Huang commented on YARN-6247:


Thanks [~subru] for the review! 

> Share a single instance of SubClusterResolver instead of instantiating one 
> per AM
> -
>
> Key: YARN-6247
> URL: https://issues.apache.org/jira/browse/YARN-6247
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Fix For: YARN-2915
>
> Attachments: YARN-6247-YARN-2915.v1.patch, 
> YARN-6247-YARN-2915.v2.patch, YARN-6247-YARN-2915.v3.patch, 
> YARN-6247-YARN-2915.v4.patch
>
>
> Add SubClusterResolver into FederationStateStoreFacade. Since the resolver 
> might involve some overhead (it may read a file in the background, 
> potentially periodically), it is good to put it inside the 
> FederationStateStoreFacade singleton, so that only one instance is created.
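
A hedged sketch of the shared-instance design (the facade and resolver classes
are real, but this simplified wiring is an assumption, not the patch):

{code}
import org.apache.hadoop.yarn.server.federation.resolver.SubClusterResolver;

// The facade is a process-wide singleton, so the potentially expensive
// resolver is created once and shared by every AM-level caller.
public final class FederationStateStoreFacadeSketch {
  private static final FederationStateStoreFacadeSketch INSTANCE =
      new FederationStateStoreFacadeSketch();

  private final SubClusterResolver resolver;

  private FederationStateStoreFacadeSketch() {
    this.resolver = createResolver();
  }

  private SubClusterResolver createResolver() {
    return null; // stand-in: the real facade builds this from configuration
  }

  public static FederationStateStoreFacadeSketch getInstance() {
    return INSTANCE;
  }

  public SubClusterResolver getSubClusterResolver() {
    return resolver; // one shared instance instead of one per AM
  }
}
{code}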






[jira] [Assigned] (YARN-5153) [YARN-3368] Add a toggle to switch timeline view / table view for containers information inside application-attempt page

2017-03-02 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB reassigned YARN-5153:
--

Assignee: Akhil PB  (was: Wangda Tan)

> [YARN-3368] Add a toggle to switch timeline view / table view for containers 
> information inside application-attempt page
> 
>
> Key: YARN-5153
> URL: https://issues.apache.org/jira/browse/YARN-5153
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Reporter: Wangda Tan
>Assignee: Akhil PB
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> screenshot-4.png, screenshot-5.png, YARN-5153.preliminary.1.patch, 
> YARN-5153-YARN-3368.1.patch
>
>
> Now we only support a timeline view for containers on the app-attempt page; 
> it would also be very useful to show a table of containers in some cases. 
> For example, the user can sort containers based on priority, etc.






[jira] [Commented] (YARN-6249) TestFairSchedulerPreemption is inconsistently failing on trunk

2017-03-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893765#comment-15893765
 ] 

Hadoop QA commented on YARN-6249:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 40m 
54s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6249 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855774/YARN-6249.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4948f58df8d5 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3749152 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15146/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15146/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestFairSchedulerPreemption is inconsistently failing on trunk
> --
>
> Key: YARN-6249
> URL: https://issues.apache.org/jira/browse/YARN-6249
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.9.0
>Reporter: Sean Po
>Assignee: Tao Jie
> Attachments: YARN-6249.001.patch, YARN-6249.002.patch
>
>
> Tests 

[jira] [Commented] (YARN-5147) [YARN-3368] Showing JMX metrics for YARN servers on new YARN UI

2017-03-02 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893758#comment-15893758
 ] 

Sunil G commented on YARN-5147:
---

Possibly YARN-5148 covers the same work.

> [YARN-3368] Showing JMX metrics for YARN servers on new YARN UI
> ---
>
> Key: YARN-5147
> URL: https://issues.apache.org/jira/browse/YARN-5147
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sreenath Somarajapuram
>







[jira] [Commented] (YARN-5179) Issue of CPU usage of containers

2017-03-02 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893747#comment-15893747
 ] 

Manikandan R commented on YARN-5179:


Thanks [~asuresh]. Added [~kasha] & [~kkaranasos] as watchers, just in case 
they don't receive the email notifications.

> Issue of CPU usage of containers
> 
>
> Key: YARN-5179
> URL: https://issues.apache.org/jira/browse/YARN-5179
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.0
> Environment: Both on Windows and Linux
>Reporter: Zhongkai Mi
>
> // Multiply by 1000 to avoid losing data when converting to int
> int milliVcoresUsed = (int) (cpuUsageTotalCoresPercentage * 1000
>     * maxVCoresAllottedForContainers / nodeCpuPercentageForYARN);
> This formula will not give the right CPU usage in vcores if vcores != 
> physical cores.
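
A worked example with assumed inputs shows how the expression tracks the
configured vcore count rather than actual physical-core usage:

{code}
// Assumed inputs, for illustration only.
float cpuUsageTotalCoresPercentage = 2.0f; // containers using 2 cores' worth of CPU
int maxVCoresAllottedForContainers = 16;   // vcores configured for containers
int nodeCpuPercentageForYARN = 100;        // all node CPU given to YARN

int milliVcoresUsed = (int) (cpuUsageTotalCoresPercentage * 1000
    * maxVCoresAllottedForContainers / nodeCpuPercentageForYARN);
// milliVcoresUsed == 320. Doubling the configured vcores doubles the result
// even though actual CPU consumption is unchanged, which is the reported issue.
{code}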






[jira] [Commented] (YARN-5948) Implement MutableConfigurationManager for handling storage into configuration store

2017-03-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893724#comment-15893724
 ] 

Hadoop QA commented on YARN-5948:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
53s{color} | {color:green} YARN-5734 passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  8m  
6s{color} | {color:red} hadoop-yarn in YARN-5734 failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
55s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
22s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} YARN-5734 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  5m 
53s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  5m 53s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 55s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 19 new + 327 unchanged - 0 fixed = 346 total (was 327) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
26s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 41m  6s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5948 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855714/YARN-5948-YARN-5734.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 990dc4311005 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5734 / 0b869d9 |
| Default Java | 1.8.0_121 |
| compile | 

[jira] [Commented] (YARN-6249) TestFairSchedulerPreemption is inconsistently failing on trunk

2017-03-02 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893722#comment-15893722
 ] 

Tao Jie commented on YARN-6249:
---

Updated the patch per [~yufeigu]'s comments, and ran the tests 200 times 
again without failure.

> TestFairSchedulerPreemption is inconsistently failing on trunk
> --
>
> Key: YARN-6249
> URL: https://issues.apache.org/jira/browse/YARN-6249
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.9.0
>Reporter: Sean Po
>Assignee: Tao Jie
> Attachments: YARN-6249.001.patch, YARN-6249.002.patch
>
>
> Tests in TestFairSchedulerPreemption.java will inconsistently fail on trunk. 
> An example stack trace: 
> {noformat}
> Tests run: 24, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 24.879 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
> testPreemptionSelectNonAMContainer[MinSharePreemptionWithDRF](org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption)
>   Time elapsed: 10.475 sec  <<< FAILURE!
> java.lang.AssertionError: Incorrect number of containers on the greedy app 
> expected:<4> but was:<8>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyPreemption(TestFairSchedulerPreemption.java:288)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.testPreemptionSelectNonAMContainer(TestFairSchedulerPreemption.java:363)
> {noformat}






[jira] [Updated] (YARN-6249) TestFairSchedulerPreemption is inconsistently failing on trunk

2017-03-02 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated YARN-6249:
--
Attachment: YARN-6249.002.patch

> TestFairSchedulerPreemption is inconsistently failing on trunk
> --
>
> Key: YARN-6249
> URL: https://issues.apache.org/jira/browse/YARN-6249
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.9.0
>Reporter: Sean Po
>Assignee: Tao Jie
> Attachments: YARN-6249.001.patch, YARN-6249.002.patch
>
>
> Tests in TestFairSchedulerPreemption.java will inconsistently fail on trunk. 
> An example stack trace: 
> {noformat}
> Tests run: 24, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 24.879 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
> testPreemptionSelectNonAMContainer[MinSharePreemptionWithDRF](org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption)
>   Time elapsed: 10.475 sec  <<< FAILURE!
> java.lang.AssertionError: Incorrect number of containers on the greedy app 
> expected:<4> but was:<8>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyPreemption(TestFairSchedulerPreemption.java:288)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.testPreemptionSelectNonAMContainer(TestFairSchedulerPreemption.java:363)
> {noformat}






[jira] [Commented] (YARN-6267) Can't create directory xxx/application_1488445897886_0001 - Permission denied

2017-03-02 Thread cjn082030 (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893696#comment-15893696
 ] 

cjn082030 commented on YARN-6267:
-

Sorry, I get it.

> Can't create directory xxx/application_1488445897886_0001 - Permission denied
> -
>
> Key: YARN-6267
> URL: https://issues.apache.org/jira/browse/YARN-6267
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: yarn
> Environment: yarn-site.xml:
> yarn.nodemanager.container-executor.class = 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor,
>Reporter: cjn082030
>
> Sorry, I have a problem with YARN; could anyone help?
> I ran wordcount (a MapReduce job). The MapReduce job FAILED.
> Diagnostics Info:
> Application application_1488445897886_0001 failed 2 times due to AM Container 
> for appattempt_1488445897886_0001_02 exited with exitCode: -1000
> For more detailed output, check application tracking 
> page:http://test:8088/cluster/app/application_1488445897886_0001Then, click 
> on links to logs of each attempt.
> Diagnostics: Application application_1488445897886_0001 initialization failed 
> (exitCode=255) with output: main : command provided 0
> main : user is nobody
> main : requested yarn user is wang1
> Can't create directory 
> /cjntest/tmp/yarn/local-dirs/usercache/wang1/appcache/application_1488445897886_0001
>  - Permission denied
> Did not create any app directories
> Failing this attempt. Failing the application.






[jira] [Assigned] (YARN-6249) TestFairSchedulerPreemption is inconsistently failing on trunk

2017-03-02 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie reassigned YARN-6249:
-

Assignee: Tao Jie  (was: Yufei Gu)

> TestFairSchedulerPreemption is inconsistently failing on trunk
> --
>
> Key: YARN-6249
> URL: https://issues.apache.org/jira/browse/YARN-6249
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.9.0
>Reporter: Sean Po
>Assignee: Tao Jie
> Attachments: YARN-6249.001.patch
>
>
> Tests in TestFairSchedulerPreemption.java will inconsistently fail on trunk. 
> An example stack trace: 
> {noformat}
> Tests run: 24, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 24.879 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
> testPreemptionSelectNonAMContainer[MinSharePreemptionWithDRF](org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption)
>   Time elapsed: 10.475 sec  <<< FAILURE!
> java.lang.AssertionError: Incorrect number of containers on the greedy app 
> expected:<4> but was:<8>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyPreemption(TestFairSchedulerPreemption.java:288)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.testPreemptionSelectNonAMContainer(TestFairSchedulerPreemption.java:363)
> {noformat}






[jira] [Commented] (YARN-6249) TestFairSchedulerPreemption is inconsistently failing on trunk

2017-03-02 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893644#comment-15893644
 ] 

Tao Jie commented on YARN-6249:
---

Thank you [~yufeigu] [~miklos.szeg...@cloudera.com] for your reply!
{quote}
 Would it make sense to initialize control clock before set it to scheduler 
like this?
{quote}
Agreed! It makes this test closer to the real world.
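
A sketch of that suggestion (test-only hooks; treat the exact names as
illustrative of trunk at the time):

{code}
import org.apache.hadoop.yarn.util.ControlledClock;

// Give the control clock a realistic starting time before wiring it into the
// scheduler, so elapsed-time checks behave as they would in production.
ControlledClock clock = new ControlledClock();
clock.setTime(System.currentTimeMillis());
scheduler.setClock(clock); // FairScheduler's test-visible setter
{code}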

> TestFairSchedulerPreemption is inconsistently failing on trunk
> --
>
> Key: YARN-6249
> URL: https://issues.apache.org/jira/browse/YARN-6249
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.9.0
>Reporter: Sean Po
>Assignee: Yufei Gu
> Attachments: YARN-6249.001.patch
>
>
> Tests in TestFairSchedulerPreemption.java will inconsistently fail on trunk. 
> An example stack trace: 
> {noformat}
> Tests run: 24, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 24.879 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
> testPreemptionSelectNonAMContainer[MinSharePreemptionWithDRF](org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption)
>   Time elapsed: 10.475 sec  <<< FAILURE!
> java.lang.AssertionError: Incorrect number of containers on the greedy app 
> expected:<4> but was:<8>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyPreemption(TestFairSchedulerPreemption.java:288)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.testPreemptionSelectNonAMContainer(TestFairSchedulerPreemption.java:363)
> {noformat}






[jira] [Updated] (YARN-6247) Share a single instance of SubClusterResolver instead of instantiating one per AM

2017-03-02 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6247:
-
Summary: Share a single instance of SubClusterResolver instead of 
instantiating one per AM  (was: Add SubClusterResolver into 
FederationStateStoreFacade)

> Share a single instance of SubClusterResolver instead of instantiating one 
> per AM
> -
>
> Key: YARN-6247
> URL: https://issues.apache.org/jira/browse/YARN-6247
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-6247-YARN-2915.v1.patch, 
> YARN-6247-YARN-2915.v2.patch, YARN-6247-YARN-2915.v3.patch, 
> YARN-6247-YARN-2915.v4.patch
>
>
> Add SubClusterResolver into FederationStateStoreFacade. Since the resolver 
> might involve some overhead (it may read a file in the background, 
> potentially periodically), it is good to put it inside the 
> FederationStateStoreFacade singleton, so that only one instance is created.






[jira] [Comment Edited] (YARN-6268) Container with extra data

2017-03-02 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-6268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893594#comment-15893594
 ] 

冯健 edited comment on YARN-6268 at 3/3/17 2:24 AM:
--

For example, Application 1 has two different types of processes that need to 
be launched in containers, say process A and process B; process A needs two 
containers and process B needs one. After the user sends three resource asks 
to the RM, the user needs to know which type of process to launch in each of 
the three allocated containers.


was (Author: fengjian):
for example , Application 1 has two different type processes need to be 
launched in container like process A and B , process A need two containers, and 
process B need one containers.  after user  send three  resource ask to RM ,   
then user need know what type process they need to launch with three allocated 
containers.

> Container with extra data
> -
>
> Key: YARN-6268
> URL: https://issues.apache.org/jira/browse/YARN-6268
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api
>Affects Versions: 3.0.0-alpha2
>Reporter: 冯健
>
> Implement a container that can carry extra data (e.g., some user-defined 
> data), so the user can perform operations with it.






[jira] [Commented] (YARN-6268) Container with extra data

2017-03-02 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-6268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893594#comment-15893594
 ] 

冯健 commented on YARN-6268:
--

For example, Application 1 has two different types of processes that need to 
be launched in containers, say process A and process B; process A needs two 
containers and process B needs one. After the user sends three resource asks 
to the RM, the user needs to know which type of process to launch in each of 
the three allocated containers.

> Container with extra data
> -
>
> Key: YARN-6268
> URL: https://issues.apache.org/jira/browse/YARN-6268
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api
>Affects Versions: 3.0.0-alpha2
>Reporter: 冯健
>
> Implement a container that can carry extra data (e.g., some user-defined 
> data), so the user can perform operations with it.






[jira] [Commented] (YARN-5948) Implement MutableConfigurationManager for handling storage into configuration store

2017-03-02 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893570#comment-15893570
 ] 

Jonathan Hung commented on YARN-5948:
-

Not sure at the moment what this compiler error is about. I was able to 
reproduce it locally with an old version of npm (1.3.24) but not with a newer 
version. YARN-5868 might be related, but that commit seems to be in the 
feature branch already.

Attempting a rebase onto trunk and rekicking the build.

> Implement MutableConfigurationManager for handling storage into configuration 
> store
> ---
>
> Key: YARN-5948
> URL: https://issues.apache.org/jira/browse/YARN-5948
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-5948.001.patch, YARN-5948-YARN-5734.002.patch, 
> YARN-5948-YARN-5734.003.patch
>
>
> The MutableConfigurationManager will take REST calls with desired client 
> configuration changes and call YarnConfigurationStore methods to store these 
> changes in the backing store.
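
A hypothetical sketch of the flow described above; the method names are
assumptions for illustration, not the YARN-5734 branch API:

{code}
import java.io.IOException;
import java.util.Map;

interface YarnConfigurationStoreSketch {
  void storeUpdate(Map<String, String> changes) throws IOException;
}

class MutableConfigurationManagerSketch {
  private final YarnConfigurationStoreSketch store;

  MutableConfigurationManagerSketch(YarnConfigurationStoreSketch store) {
    this.store = store;
  }

  /** Entry point for the REST layer: persist the desired config changes. */
  void applyChanges(Map<String, String> changes) throws IOException {
    store.storeUpdate(changes);
  }
}
{code}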






[jira] [Commented] (YARN-6271) yarn rmadmin -getGroups returns information from standby RM

2017-03-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893543#comment-15893543
 ] 

Hadoop QA commented on YARN-6271:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 51s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.TestResourceTrackerService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6271 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855742/YARN-6271.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bab792374304 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8f4817f |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15141/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15141/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15141/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> yarn rmadin -getGroups returns information from standby RM

[jira] [Commented] (YARN-6271) yarn rmadin -getGroups returns information from standby RM

2017-03-02 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893450#comment-15893450
 ] 

Junping Du commented on YARN-6271:
--

Patch LGTM. +1 pending on Jenkins.

> yarn rmadin -getGroups returns information from standby RM
> --
>
> Key: YARN-6271
> URL: https://issues.apache.org/jira/browse/YARN-6271
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Jian He
>Priority: Critical
> Attachments: YARN-6271.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6273) TestAMRMClient#testAllocationWithBlacklist fails intermittently

2017-03-02 Thread Ray Chiang (JIRA)
Ray Chiang created YARN-6273:


 Summary: TestAMRMClient#testAllocationWithBlacklist fails 
intermittently
 Key: YARN-6273
 URL: https://issues.apache.org/jira/browse/YARN-6273
 Project: Hadoop YARN
  Issue Type: Test
  Components: yarn
Affects Versions: 3.0.0-alpha2
Reporter: Ray Chiang


I'm seeing this unit test fail in trunk:

testAllocationWithBlacklist(org.apache.hadoop.yarn.client.api.impl.TestAMRMClient)
  Time elapsed: 0.738 sec  <<< FAILURE!
java.lang.AssertionError: expected:<2> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.yarn.client.api.impl.TestAMRMClient.testAllocationWithBlacklist(TestAMRMClient.java:721)




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5956) Refactor ClientRMService

2017-03-02 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893385#comment-15893385
 ] 

Kai Sasaki commented on YARN-5956:
--

[~sunilg] Sure, let me check findbugs and check style issues.

> Refactor ClientRMService
> 
>
> Key: YARN-5956
> URL: https://issues.apache.org/jira/browse/YARN-5956
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha2
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
> Attachments: YARN-5956.01.patch, YARN-5956.02.patch, 
> YARN-5956.03.patch, YARN-5956.04.patch, YARN-5956.05.patch, 
> YARN-5956.06.patch, YARN-5956.07.patch, YARN-5956.08.patch, 
> YARN-5956.09.patch, YARN-5956.10.patch, YARN-5956.11.patch
>
>
> Some refactoring can be done in {{ClientRMService}}.
> - Remove redundant variable declaration
> - Fill in missing javadocs
> - Proper variable access modifier
> - Fix some typos in method name and exception messages



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6272) TestAMRMClient#testAMRMClientWithContainerResourceChange fails intermittently

2017-03-02 Thread Ray Chiang (JIRA)
Ray Chiang created YARN-6272:


 Summary: TestAMRMClient#testAMRMClientWithContainerResourceChange 
fails intermittently
 Key: YARN-6272
 URL: https://issues.apache.org/jira/browse/YARN-6272
 Project: Hadoop YARN
  Issue Type: Test
  Components: yarn
Affects Versions: 3.0.0-alpha3
Reporter: Ray Chiang


I'm seeing this unit test fail fairly often in trunk:

testAMRMClientWithContainerResourceChange(org.apache.hadoop.yarn.client.api.impl.TestAMRMClient)
  Time elapsed: 5.113 sec  <<< FAILURE!
java.lang.AssertionError: expected:<1> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.yarn.client.api.impl.TestAMRMClient.doContainerResourceChange(TestAMRMClient.java:1087)
at 
org.apache.hadoop.yarn.client.api.impl.TestAMRMClient.testAMRMClientWithContainerResourceChange(TestAMRMClient.java:963)




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6271) yarn rmadin -getGroups returns information from standby RM

2017-03-02 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6271:
--
Attachment: YARN-6271.1.patch

> yarn rmadin -getGroups returns information from standby RM
> --
>
> Key: YARN-6271
> URL: https://issues.apache.org/jira/browse/YARN-6271
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Jian He
>Priority: Critical
> Attachments: YARN-6271.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6271) yarn rmadin -getGroups returns information from standby RM

2017-03-02 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893362#comment-15893362
 ] 

Jian He commented on YARN-6271:
---

Thanks for filing this. This is because AdminService#getGroupsForUser does not 
check whether the RM is in standby.
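
A minimal sketch of the kind of guard that would address this, assuming the 
isRMActive() helper that AdminService uses elsewhere (an illustration, not the 
attached patch):
{code:java}
// Hypothetical fix sketch: reject group lookups while this RM is in standby,
// the same way other AdminService operations are guarded.
@Override
public String[] getGroupsForUser(String user) throws IOException {
  if (!isRMActive()) {
    // Fail the RPC so the client fails over to the active RM.
    throw new IOException(
        "ResourceManager is not active; cannot resolve groups for " + user);
  }
  return UserGroupInformation.createRemoteUser(user).getGroupNames();
}
{code}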

> yarn rmadin -getGroups returns information from standby RM
> --
>
> Key: YARN-6271
> URL: https://issues.apache.org/jira/browse/YARN-6271
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Jian He
>Priority: Critical
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6271) yarn rmadin -getGroups returns information from standby RM

2017-03-02 Thread Sumana Sathish (JIRA)
Sumana Sathish created YARN-6271:


 Summary: yarn rmadin -getGroups returns information from standby RM
 Key: YARN-6271
 URL: https://issues.apache.org/jira/browse/YARN-6271
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Reporter: Sumana Sathish
Assignee: Jian He
Priority: Critical






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5948) Implement MutableConfigurationManager for handling storage into configuration store

2017-03-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893356#comment-15893356
 ] 

Hadoop QA commented on YARN-5948:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 4s{color} | {color:green} YARN-5734 passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  7m  
1s{color} | {color:red} hadoop-yarn in YARN-5734 failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
58s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
24s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} YARN-5734 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  5m 
45s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  5m 45s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 58s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 19 new + 325 unchanged - 0 fixed = 344 total (was 325) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
28s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 16s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5948 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855714/YARN-5948-YARN-5734.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux d637d8143349 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | 

[jira] [Commented] (YARN-6249) TestFairSchedulerPreemption is inconsistently failing on trunk

2017-03-02 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893220#comment-15893220
 ] 

Yufei Gu commented on YARN-6249:


BTW, [~Tao Jie], you can take this JIRA if you want to.

> TestFairSchedulerPreemption is inconsistently failing on trunk
> --
>
> Key: YARN-6249
> URL: https://issues.apache.org/jira/browse/YARN-6249
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.9.0
>Reporter: Sean Po
>Assignee: Yufei Gu
> Attachments: YARN-6249.001.patch
>
>
> Tests in TestFairSchedulerPreemption.java will inconsistently fail on trunk. 
> An example stack trace: 
> {noformat}
> Tests run: 24, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 24.879 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
> testPreemptionSelectNonAMContainer[MinSharePreemptionWithDRF](org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption)
>   Time elapsed: 10.475 sec  <<< FAILURE!
> java.lang.AssertionError: Incorrect number of containers on the greedy app 
> expected:<4> but was:<8>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyPreemption(TestFairSchedulerPreemption.java:288)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.testPreemptionSelectNonAMContainer(TestFairSchedulerPreemption.java:363)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5948) Implement MutableConfigurationManager for handling storage into configuration store

2017-03-02 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893171#comment-15893171
 ] 

Jonathan Hung commented on YARN-5948:
-

003 fixes the TestYarnConfigurationFields test failure; the other failures 
don't seem related. It also fixes the javadoc issue.

> Implement MutableConfigurationManager for handling storage into configuration 
> store
> ---
>
> Key: YARN-5948
> URL: https://issues.apache.org/jira/browse/YARN-5948
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-5948.001.patch, YARN-5948-YARN-5734.002.patch, 
> YARN-5948-YARN-5734.003.patch
>
>
> The MutableConfigurationManager will take REST calls with desired client 
> configuration changes and call YarnConfigurationStore methods to store these 
> changes in the backing store.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5948) Implement MutableConfigurationManager for handling storage into configuration store

2017-03-02 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-5948:

Attachment: (was: YARN-5948-YARN-5734.003.patch)

> Implement MutableConfigurationManager for handling storage into configuration 
> store
> ---
>
> Key: YARN-5948
> URL: https://issues.apache.org/jira/browse/YARN-5948
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-5948.001.patch, YARN-5948-YARN-5734.002.patch, 
> YARN-5948-YARN-5734.003.patch
>
>
> The MutableConfigurationManager will take REST calls with desired client 
> configuration changes and call YarnConfigurationStore methods to store these 
> changes in the backing store.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5948) Implement MutableConfigurationManager for handling storage into configuration store

2017-03-02 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-5948:

Attachment: YARN-5948-YARN-5734.003.patch

> Implement MutableConfigurationManager for handling storage into configuration 
> store
> ---
>
> Key: YARN-5948
> URL: https://issues.apache.org/jira/browse/YARN-5948
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-5948.001.patch, YARN-5948-YARN-5734.002.patch, 
> YARN-5948-YARN-5734.003.patch
>
>
> The MutableConfigurationManager will take REST calls with desired client 
> configuration changes and call YarnConfigurationStore methods to store these 
> changes in the backing store.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5948) Implement MutableConfigurationManager for handling storage into configuration store

2017-03-02 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-5948:

Attachment: YARN-5948-YARN-5734.003.patch

> Implement MutableConfigurationManager for handling storage into configuration 
> store
> ---
>
> Key: YARN-5948
> URL: https://issues.apache.org/jira/browse/YARN-5948
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-5948.001.patch, YARN-5948-YARN-5734.002.patch, 
> YARN-5948-YARN-5734.003.patch
>
>
> The MutableConfigurationManager will take REST calls with desired client 
> configuration changes and call YarnConfigurationStore methods to store these 
> changes in the backing store.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6270) WebUtils.getRMWebAppURLWithScheme() needs to honor RM HA setting

2017-03-02 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-6270:

Attachment: YARN-6270.1.patch

> WebUtils.getRMWebAppURLWithScheme() needs to honor RM HA setting
> 
>
> Key: YARN-6270
> URL: https://issues.apache.org/jira/browse/YARN-6270
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Xuan Gong
> Attachments: YARN-6270.1.patch
>
>
> yarn log cli: yarn logs -applicationId application_1488441635386_0005 -am 1 
> failed with the connection exception when HA is enabled
> {code}
> Unable to get AM container informations for the 
> application:application_1488441635386_0005
> java.net.ConnectException: Connection refused (Connection refused)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6270) WebUtils.getRMWebAppURLWithScheme() needs to honor RM HA setting

2017-03-02 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-6270:

Description: 
yarn log cli: yarn logs -applicationId application_1488441635386_0005 -am 1 
failed with the connection exception when HA is enabled
{code}
Unable to get AM container informations for the 
application:application_1488441635386_0005
java.net.ConnectException: Connection refused (Connection refused)
{code}
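
A rough sketch of the HA-aware resolution the summary calls for (helper names 
from HAUtil/YarnConfiguration; taking the first RM id is a simplification, not 
the actual patch, which would need to find the active RM):
{code:java}
// Hedged sketch: under HA there is no plain yarn.resourcemanager.webapp.address,
// so derive the per-RM-id key instead of falling back to a dead default.
static String getRMWebAppURLWithScheme(Configuration conf) {
  boolean https = YarnConfiguration.useHttps(conf);
  String scheme = https ? "https://" : "http://";
  String key = https ? YarnConfiguration.RM_WEBAPP_HTTPS_ADDRESS
      : YarnConfiguration.RM_WEBAPP_ADDRESS;
  if (HAUtil.isHAEnabled(conf)) {
    // Simplification: use the first configured RM id; a real implementation
    // would probe each candidate and keep the one that answers.
    String rmId = HAUtil.getRMHAIds(conf).iterator().next();
    return scheme + conf.get(HAUtil.addSuffix(key, rmId));
  }
  return scheme + conf.get(key);
}
{code}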

> WebUtils.getRMWebAppURLWithScheme() needs to honor RM HA setting
> 
>
> Key: YARN-6270
> URL: https://issues.apache.org/jira/browse/YARN-6270
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Xuan Gong
>
> yarn log cli: yarn logs -applicationId application_1488441635386_0005 -am 1 
> failed with the connection exception when HA is enabled
> {code}
> Unable to get AM container informations for the 
> application:application_1488441635386_0005
> java.net.ConnectException: Connection refused (Connection refused)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6270) WebUtils.getRMWebAppURLWithScheme() needs to honor RM HA setting

2017-03-02 Thread Xuan Gong (JIRA)
Xuan Gong created YARN-6270:
---

 Summary: WebUtils.getRMWebAppURLWithScheme() needs to honor RM HA 
setting
 Key: YARN-6270
 URL: https://issues.apache.org/jira/browse/YARN-6270
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Sumana Sathish
Assignee: Xuan Gong






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6249) TestFairSchedulerPreemption is inconsistently failing on trunk

2017-03-02 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893130#comment-15893130
 ] 

Yufei Gu commented on YARN-6249:


[~Tao Jie], thanks for debugging this. Good catch. For your patch, adding a 
{{scheduler.update()}} after the scheduling action is good practice, in line 
with [~miklos.szeg...@cloudera.com]'s change in YARN-6218, but a little 
indirect for this issue. Would it make sense to initialize the control clock 
before setting it on the scheduler, like this?
{code}
clock.setTime(SystemClock.getInstance().getTime());
scheduler.setClock(clock);
{code}
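
Spelled out a bit more, the intended setup is roughly this hedged sketch 
(using the names from the snippet above; not a tested patch):
{code:java}
// Align the ControlledClock with wall-clock time before injecting it, so
// intervals computed against earlier real timestamps stay meaningful.
ControlledClock clock = new ControlledClock();
clock.setTime(SystemClock.getInstance().getTime()); // start at "now", not 0
scheduler.setClock(clock); // inject before any scheduling activity
scheduler.update();        // refresh scheduler state under the new clock
{code}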



> TestFairSchedulerPreemption is inconsistently failing on trunk
> --
>
> Key: YARN-6249
> URL: https://issues.apache.org/jira/browse/YARN-6249
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.9.0
>Reporter: Sean Po
>Assignee: Yufei Gu
> Attachments: YARN-6249.001.patch
>
>
> Tests in TestFairSchedulerPreemption.java will inconsistently fail on trunk. 
> An example stack trace: 
> {noformat}
> Tests run: 24, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 24.879 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
> testPreemptionSelectNonAMContainer[MinSharePreemptionWithDRF](org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption)
>   Time elapsed: 10.475 sec  <<< FAILURE!
> java.lang.AssertionError: Incorrect number of containers on the greedy app 
> expected:<4> but was:<8>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyPreemption(TestFairSchedulerPreemption.java:288)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.testPreemptionSelectNonAMContainer(TestFairSchedulerPreemption.java:363)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6265) yarn.resourcemanager.fail-fast is used inconsistently

2017-03-02 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893101#comment-15893101
 ] 

Junping Du commented on YARN-6265:
--

Agree. IIRC, the latter one - making the RM fail fast on a state store 
operation failure - is what this configuration was designed for. cc [~kasha], 
[~jianhe]. This is to control system-level risk for the whole cluster. For an 
app submitted without a valid queue, we should have a separate configuration 
to define the expected behavior.
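
For illustration, the split could look like this hedged sketch (both property 
names are hypothetical, not taken from any patch):
{code:java}
// Hypothetical separation of the two behaviors currently sharing
// yarn.resourcemanager.fail-fast:
boolean killAppOnBadQueue = conf.getBoolean(
    "yarn.resourcemanager.placement.fail-fast", false);   // hypothetical key
boolean exitOnStoreFailure = conf.getBoolean(
    "yarn.resourcemanager.state-store.fail-fast", false); // hypothetical key
{code}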

> yarn.resourcemanager.fail-fast is used inconsistently
> -
>
> Key: YARN-6265
> URL: https://issues.apache.org/jira/browse/YARN-6265
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>
> In capacity scheduler, the property is used to control whether an app with 
> no/bad queue should be killed.  In the state store, the property controls 
> whether a state store op failure should cause the RM to exit in non-HA mode.  
> Those are two very different things, and they should be separated.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5948) Implement MutableConfigurationManager for handling storage into configuration store

2017-03-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893099#comment-15893099
 ] 

Hadoop QA commented on YARN-5948:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
56s{color} | {color:green} YARN-5734 passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  6m 
48s{color} | {color:red} hadoop-yarn in YARN-5734 failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} YARN-5734 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  5m 
42s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  5m 42s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 54s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 20 new + 325 unchanged - 0 fixed = 345 total (was 325) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
26s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 908 unchanged - 0 fixed = 909 total (was 908) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 34s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 32s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5948 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855687/YARN-5948-YARN-5734.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6d1e8170c39f 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5734 / c12bed6 |
| Default Java | 1.8.0_121 |
| compile 

[jira] [Commented] (YARN-6254) Provide a mechanism to whitelist the RM REST API clients

2017-03-02 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892954#comment-15892954
 ] 

Benoy Antony commented on YARN-6254:


I already have a jira with a patch for this capability via HADOOP-10679. 
Linking these two jiras.
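
For context, the capability being requested is roughly a host filter in front 
of the REST endpoints; a minimal sketch (illustrative only, not the 
HADOOP-10679 patch):
{code:java}
import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet filter that rejects REST calls from hosts outside a
// configured whitelist.
public class HostWhitelistFilter implements Filter {
  private Set<String> allowedHosts;

  @Override
  public void init(FilterConfig cfg) {
    allowedHosts = new HashSet<>(
        Arrays.asList(cfg.getInitParameter("allowed.hosts").split(",")));
  }

  @Override
  public void doFilter(ServletRequest req, ServletResponse resp,
      FilterChain chain) throws IOException, ServletException {
    if (allowedHosts.contains(req.getRemoteAddr())) {
      chain.doFilter(req, resp); // whitelisted host, let the call through
    } else {
      ((HttpServletResponse) resp).sendError(
          HttpServletResponse.SC_FORBIDDEN, "host not whitelisted");
    }
  }

  @Override
  public void destroy() {
  }
}
{code}
Rate limiting (the "frequency" part of the request) would need per-host 
counters on top of this and is not shown.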

> Provide a mechanism to whitelist the RM REST API clients
> 
>
> Key: YARN-6254
> URL: https://issues.apache.org/jira/browse/YARN-6254
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager
>Affects Versions: 2.7.1
>Reporter: Aroop Maliakkal
>
> Currently RM REST APIs are open to everyone. Can we provide a whitelist 
> feature so that we can control what frequency and what hosts can hit the RM 
> REST APIs ?
> Thanks,
> /Aroop



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6256) Add FROM_ID info key for timeline entities in reader response.

2017-03-02 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892935#comment-15892935
 ] 

Sangjin Lee edited comment on YARN-6256 at 3/2/17 8:35 PM:
---

Thanks for the patch [~rohithsharma]! I took a quick look at it, and I still 
need to do a more thorough review, but my observations so far are similar to 
Varun's above.

I do have another question (which also applies to YARN-6027). It seems that 
we're embedding the FROMID value for every single entity we return. Would it be 
possible to do this only for the "last" entity that gets returned if we're 
returning multiple entities? This might be a secondary optimization, but for 
all other entities but the last, the FROMID value would be ignored. In that 
sense, it would simply add to the payload size without providing benefits. 
Thoughts?


was (Author: sjlee0):
Thanks for the patch [~rohithsharma]! I took a quick look at it, and I still 
need to do a more thorough review, but my observations so far are similar to 
Varun's above.

I do have another questions (which also applies to YARN-6027). It seems that 
we're embedding the FROMID value for every single entity we return. Would it be 
possible to do this only for the "last" entity that gets returned if we're 
returning multiple entities? This might be a secondary optimization, but for 
all other entities but the last, the FROMID value would be ignored. In that 
sense, it would simply add to the payload size without providing benefits. 
Thoughts?

> Add FROM_ID info key for timeline entities in reader response. 
> ---
>
> Key: YARN-6256
> URL: https://issues.apache.org/jira/browse/YARN-6256
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6256-YARN-5355.0001.patch
>
>
> It is continuation with YARN-6027 to add FROM_ID key in all other timeline 
> entity responses which includes
> # Flow run entity response. 
> # Application entity response
> # Generic timeline entity response - Here we need to retrospect on idprefix 
> filter which is now separately provided. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6218) TestAMRMClient fails with fair scheduler

2017-03-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892941#comment-15892941
 ] 

Hadoop QA commented on YARN-6218:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 80 unchanged - 14 fixed = 80 total (was 94) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 30s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m  
5s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6218 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855665/YARN-6218.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ed01d947a8a4 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / eeca8b0 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15136/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15136/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 

[jira] [Commented] (YARN-6256) Add FROM_ID info key for timeline entities in reader response.

2017-03-02 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892935#comment-15892935
 ] 

Sangjin Lee commented on YARN-6256:
---

Thanks for the patch [~rohithsharma]! I took a quick look at it, and I still 
need to do a more thorough review, but my observations so far are similar to 
Varun's above.

I do have another question (which also applies to YARN-6027). It seems that 
we're embedding the FROMID value for every single entity we return. Would it be 
possible to do this only for the "last" entity that gets returned if we're 
returning multiple entities? This might be a secondary optimization, but for 
all other entities but the last, the FROMID value would be ignored. In that 
sense, it would simply add to the payload size without providing benefits. 
Thoughts?
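
A hedged illustration of that optimization (the helper names here are 
assumptions, not code from the attached patch):
{code:java}
// Stamp the paging key only on the final entity of the response; FROM_ID on
// earlier entities would never be consumed by a paging client.
Set<TimelineEntity> entities = readEntities(context, filters); // assumed helper
TimelineEntity last = null;
for (TimelineEntity e : entities) {
  last = e; // iteration order is the response order
}
if (last != null) {
  last.addInfo("FROM_ID", computeFromId(last)); // assumed key name and helper
}
{code}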

> Add FROM_ID info key for timeline entities in reader response. 
> ---
>
> Key: YARN-6256
> URL: https://issues.apache.org/jira/browse/YARN-6256
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6256-YARN-5355.0001.patch
>
>
> It is continuation with YARN-6027 to add FROM_ID key in all other timeline 
> entity responses which includes
> # Flow run entity response. 
> # Application entity response
> # Generic timeline entity response - Here we need to retrospect on idprefix 
> filter which is now separately provided. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5948) Implement MutableConfigurationManager for handling storage into configuration store

2017-03-02 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892909#comment-15892909
 ] 

Jonathan Hung commented on YARN-5948:
-

Attached 002 patch, with a CSConfigurationProvider implementation which allows 
mutating configuration via YarnConfigurationStore.
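
As a reading aid, the flow described in this sub-task is roughly the following 
hedged sketch (method and constructor shapes are assumptions, not necessarily 
what the patch does):
{code:java}
// Hypothetical sketch: persist a client's configuration updates through the
// backing YarnConfigurationStore in two phases (log the mutation, confirm it).
public class MutableConfigurationManager {
  private final YarnConfigurationStore confStore;

  public MutableConfigurationManager(YarnConfigurationStore confStore) {
    this.confStore = confStore;
  }

  public void mutateConfiguration(String user, Map<String, String> updates)
      throws IOException {
    long id = confStore.logMutation(
        new YarnConfigurationStore.LogMutation(updates, user)); // record intent
    confStore.confirmMutation(id, true); // commit the change
  }
}
{code}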

> Implement MutableConfigurationManager for handling storage into configuration 
> store
> ---
>
> Key: YARN-5948
> URL: https://issues.apache.org/jira/browse/YARN-5948
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-5948.001.patch, YARN-5948-YARN-5734.002.patch
>
>
> The MutableConfigurationManager will take REST calls with desired client 
> configuration changes and call YarnConfigurationStore methods to store these 
> changes in the backing store.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5948) Implement MutableConfigurationManager for handling storage into configuration store

2017-03-02 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-5948:

Attachment: YARN-5948-YARN-5734.002.patch

> Implement MutableConfigurationManager for handling storage into configuration 
> store
> ---
>
> Key: YARN-5948
> URL: https://issues.apache.org/jira/browse/YARN-5948
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-5948.001.patch, YARN-5948-YARN-5734.002.patch
>
>
> The MutableConfigurationManager will take REST calls with desired client 
> configuration changes and call YarnConfigurationStore methods to store these 
> changes in the backing store.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6153) keepContainer does not work when AM retry window is set

2017-03-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892892#comment-15892892
 ] 

Hadoop QA commented on YARN-6153:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 50 unchanged - 1 fixed = 50 total (was 51) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 40m 
10s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6153 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855626/YARN-6153.006-1.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a78e5cd9fba3 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 747bafa |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15137/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15137/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> keepContainer does not work when AM retry window is set
> ---
>
> Key: YARN-6153
> URL: https://issues.apache.org/jira/browse/YARN-6153
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.1
>

[jira] [Commented] (YARN-6218) TestAMRMClient fails with fair scheduler

2017-03-02 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892866#comment-15892866
 ] 

Yufei Gu commented on YARN-6218:


+1 (non-binding)

> TestAMRMClient fails with fair scheduler
> 
>
> Key: YARN-6218
> URL: https://issues.apache.org/jira/browse/YARN-6218
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Minor
> Attachments: YARN-6218.000.patch, YARN-6218.001.patch, 
> YARN-6218.002.patch
>
>
> We ran into this issue on v2. Allocation does not happen in the specified 
> amount of time.
> Error Message
> expected:<2> but was:<0>
> Stacktrace
> java.lang.AssertionError: expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestAMRMClient.testAMRMClientMatchStorage(TestAMRMClient.java:495)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6153) keepContainer does not work when AM retry window is set

2017-03-02 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892826#comment-15892826
 ] 

Jian He commented on YARN-6153:
---

[~kyungwan nam], thanks for the investigation. I missed this. Instead of 
changing it back to a hard-coded sleep, we should continue to use the system 
clock. A hard-coded sleep is bad because it prolongs the test's execution and 
is also sometimes nondeterministic.
So let's revert the change that moved this code to RMAppAttempt, so that we 
can continue to use the system clock.
{code}
-if (this.attemptFailuresValidityInterval <= 0
-|| (attempt.getFinishTime() > endTime
-- this.attemptFailuresValidityInterval)) {
-  completedAttempts++;
-}
+com
{code}
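
In other words, keep the check clock-driven; a hedged sketch of the intended 
shape (field names from the diff above, clock usage assumed):
{code:java}
// Count an attempt against max-attempts only if it failed inside the sliding
// validity window, measured via the injected clock so tests can advance time
// without sleeping.
long endTime = clock.getTime();
if (attemptFailuresValidityInterval <= 0
    || attempt.getFinishTime() > endTime - attemptFailuresValidityInterval) {
  completedAttempts++;
}
{code}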

> keepContainer does not work when AM retry window is set
> ---
>
> Key: YARN-6153
> URL: https://issues.apache.org/jira/browse/YARN-6153
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.1
>Reporter: kyungwan nam
>Assignee: kyungwan nam
> Fix For: 2.8.0, 3.0.0-alpha3
>
> Attachments: YARN-6153.001.patch, YARN-6153.002.patch, 
> YARN-6153.003.patch, YARN-6153.004.patch, YARN-6153.005.patch, 
> YARN-6153.006-1.patch, YARN-6153.006.patch, YARN-6153-branch-2.8.patch
>
>
> yarn.resourcemanager.am.max-attempts has been configured to 2 in my cluster.
> I submitted a YARN application (slider app) that keepContainers=true, 
> attemptFailuresValidityInterval=30.
> it did work properly when AM was failed firstly.
> all containers launched by previous AM were resynced with new AM (attempt2) 
> without killing containers.
> after 10 minutes, I thought AM failure count was reset by 
> attemptFailuresValidityInterval (5 minutes).
> but, all containers were killed when AM was failed secondly. (new AM attempt3 
> was launched properly)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-3471) Fix timeline client retry

2017-03-02 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-3471:
-
Comment: was deleted

(was: Client retry is done  in two places today after YARN-4675:
1)  In TimelineV2ClientImpl
{code:java}
  protected void putObjects(String path, MultivaluedMap<String, String> params,
  Object obj) throws IOException, YarnException {

int retries = verifyRestEndPointAvailable();

// timelineServiceAddress could be stale, add retry logic here.
boolean needRetry = true;
while (needRetry) {
  try {
URI uri = TimelineConnector.constructResURI(getConfig(),
timelineServiceAddress, RESOURCE_URI_STR_V2);
putObjects(uri, path, params, obj);
needRetry = false;
  } catch (IOException e) {
// handle exception for timelineServiceAddress being updated.
checkRetryWithSleep(retries, e);
retries--;
  }
}
  }
{code}
The client will retry upon IOExceptions thrown by  putObjects(uri, path, 
params, obj);

2) As a ClientFilter of the Jersey client in TimelineConnector, namely 
TimelineJerseyRetryFilter. Requests are only retried upon connection 
exceptions. It already uses the TimelineClientConnectionRetry logic.

I think 1) is redundant given 2) has taken care of our retry cases. )

> Fix timeline client retry
> -
>
> Key: YARN-3471
> URL: https://issues.apache.org/jira/browse/YARN-3471
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 2.8.0
>Reporter: Zhijie Shen
>Assignee: Haibo Chen
>  Labels: YARN-5355
> Attachments: YARN-3471.1.patch, YARN-3471.2.patch
>
>
> I found that the client retry has some problems:
> 1. The new put methods will retry on all exception, but they should only do 
> it upon ConnectException.
> 2. We can reuse TimelineClientConnectionRetry to simplify the retry logic.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-3471) Fix timeline client retry

2017-03-02 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892774#comment-15892774
 ] 

Haibo Chen edited comment on YARN-3471 at 3/2/17 6:49 PM:
--

Client retry is done  in two places today after YARN-4675:
1)  In TimelineV2ClientImpl
{code:java}
  protected void putObjects(String path, MultivaluedMap<String, String> params,
  Object obj) throws IOException, YarnException {

int retries = verifyRestEndPointAvailable();

// timelineServiceAddress could be stale, add retry logic here.
boolean needRetry = true;
while (needRetry) {
  try {
URI uri = TimelineConnector.constructResURI(getConfig(),
timelineServiceAddress, RESOURCE_URI_STR_V2);
putObjects(uri, path, params, obj);
needRetry = false;
  } catch (IOException e) {
// handle exception for timelineServiceAddress being updated.
checkRetryWithSleep(retries, e);
retries--;
  }
}
  }
{code}
The client will retry upon IOExceptions thrown by  putObjects(uri, path, 
params, obj);

2) As a ClientFilter of the Jersey client in TimelineConnector, namely 
TimelineJerseyRetryFilter. Requests are only retried upon connection 
exceptions. It already uses the TimelineClientConnectionRetry logic.

I think 1) is redundant given 2) has taken care of our retry cases. 


was (Author: haibochen):
Client retry is done in two places today, after YARN-4675:
1)  In TimelineV2ClientImpl
{code:java}
  protected void putObjects(String path, MultivaluedMap<String, String> params,
  Object obj) throws IOException, YarnException {

int retries = verifyRestEndPointAvailable();

// timelineServiceAddress could be stale, add retry logic here.
boolean needRetry = true;
while (needRetry) {
  try {
URI uri = TimelineConnector.constructResURI(getConfig(),
timelineServiceAddress, RESOURCE_URI_STR_V2);
putObjects(uri, path, params, obj);
needRetry = false;
  } catch (IOException e) {
// handle exception for timelineServiceAddress being updated.
checkRetryWithSleep(retries, e);
retries--;
  }
}
  }
{code}
The client will retry upon IOExceptions thrown by putObjects(uri, path, 
params, obj).

2) As a ClientFilter of the Jersey client in TimelineConnector, namely 
TimelineJerseyRetryFilter. Requests are only retried upon connection 
exceptions.

I think 1) is redundant, given that 2) has taken care of our retry cases. 

> Fix timeline client retry
> -
>
> Key: YARN-3471
> URL: https://issues.apache.org/jira/browse/YARN-3471
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 2.8.0
>Reporter: Zhijie Shen
>Assignee: Haibo Chen
>  Labels: YARN-5355
> Attachments: YARN-3471.1.patch, YARN-3471.2.patch
>
>
> I found that the client retry has some problems:
> 1. The new put methods will retry on all exception, but they should only do 
> it upon ConnectException.
> 2. We can reuse TimelineClientConnectionRetry to simplify the retry logic.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3471) Fix timeline client retry

2017-03-02 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892774#comment-15892774
 ] 

Haibo Chen commented on YARN-3471:
--

Client retry is done in two places today, after YARN-4675:
1)  In TimelineV2ClientImpl
{code:java}
  protected void putObjects(String path, MultivaluedMap<String, String> params,
  Object obj) throws IOException, YarnException {

int retries = verifyRestEndPointAvailable();

// timelineServiceAddress could be stale, add retry logic here.
boolean needRetry = true;
while (needRetry) {
  try {
URI uri = TimelineConnector.constructResURI(getConfig(),
timelineServiceAddress, RESOURCE_URI_STR_V2);
putObjects(uri, path, params, obj);
needRetry = false;
  } catch (IOException e) {
// handle exception for timelineServiceAddress being updated.
checkRetryWithSleep(retries, e);
retries--;
  }
}
  }
{code}
The client will retry upon IOExceptions thrown by putObjects(uri, path, 
params, obj).

2) As a ClientFilter of the Jersey client in TimelineConnector, namely 
TimelineJerseyRetryFilter. Requests are only retried upon connection 
exceptions.

I think 1) is redundant, given that 2) has taken care of our retry cases. 
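
For reference, here is a minimal, self-contained sketch of the ClientFilter 
retry pattern described in 2). The class name, constructor, and retry policy 
are illustrative assumptions (this is not the actual TimelineJerseyRetryFilter); 
only the idea of retrying on connection-level failures is taken from the 
discussion above.
{code:java}
// Illustrative only: a simplified connection-retry filter in the spirit of
// TimelineJerseyRetryFilter; names and retry policy are assumed.
import java.net.ConnectException;

import com.sun.jersey.api.client.ClientHandlerException;
import com.sun.jersey.api.client.ClientRequest;
import com.sun.jersey.api.client.ClientResponse;
import com.sun.jersey.api.client.filter.ClientFilter;

public class ConnectionRetryFilter extends ClientFilter {
  private final int maxRetries;
  private final long retryIntervalMs;

  public ConnectionRetryFilter(int maxRetries, long retryIntervalMs) {
    this.maxRetries = maxRetries;
    this.retryIntervalMs = retryIntervalMs;
  }

  @Override
  public ClientResponse handle(ClientRequest request)
      throws ClientHandlerException {
    int retriesLeft = maxRetries;
    while (true) {
      try {
        // Delegate to the next filter and, ultimately, the HTTP handler.
        return getNext().handle(request);
      } catch (ClientHandlerException e) {
        // Retry only on connection-level failures, per the comment above.
        if (!(e.getCause() instanceof ConnectException) || retriesLeft <= 0) {
          throw e;
        }
        retriesLeft--;
        try {
          Thread.sleep(retryIntervalMs);
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          throw e;
        }
      }
    }
  }
}
{code}
A filter registered on the Jersey Client (via addFilter) wraps every request 
in this loop, which is why the per-call retry in 1) can be dropped.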

> Fix timeline client retry
> -
>
> Key: YARN-3471
> URL: https://issues.apache.org/jira/browse/YARN-3471
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 2.8.0
>Reporter: Zhijie Shen
>Assignee: Haibo Chen
>  Labels: YARN-5355
> Attachments: YARN-3471.1.patch, YARN-3471.2.patch
>
>
> I found that the client retry has some problems:
> 1. The new put methods will retry on all exception, but they should only do 
> it upon ConnectException.
> 2. We can reuse TimelineClientConnectionRetry to simplify the retry logic.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6218) TestAMRMClient fails with fair scheduler

2017-03-02 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892763#comment-15892763
 ] 

Miklos Szegedi commented on YARN-6218:
--

Thank you, [~yufeigu], for the comments. I updated the patch.

> TestAMRMClient fails with fair scheduler
> 
>
> Key: YARN-6218
> URL: https://issues.apache.org/jira/browse/YARN-6218
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Minor
> Attachments: YARN-6218.000.patch, YARN-6218.001.patch, 
> YARN-6218.002.patch
>
>
> We ran into this issue on v2. Allocation does not happen in the specified 
> amount of time.
> Error Message
> expected:<2> but was:<0>
> Stacktrace
> java.lang.AssertionError: expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestAMRMClient.testAMRMClientMatchStorage(TestAMRMClient.java:495)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6218) TestAMRMClient fails with fair scheduler

2017-03-02 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-6218:
-
Attachment: YARN-6218.002.patch

> TestAMRMClient fails with fair scheduler
> 
>
> Key: YARN-6218
> URL: https://issues.apache.org/jira/browse/YARN-6218
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Minor
> Attachments: YARN-6218.000.patch, YARN-6218.001.patch, 
> YARN-6218.002.patch
>
>
> We ran into this issue on v2. Allocation does not happen in the specified 
> amount of time.
> Error Message
> expected:<2> but was:<0>
> Stacktrace
> java.lang.AssertionError: expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestAMRMClient.testAMRMClientMatchStorage(TestAMRMClient.java:495)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-3471) Fix timeline client retry

2017-03-02 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen reassigned YARN-3471:


Assignee: Haibo Chen  (was: Varun Saxena)

> Fix timeline client retry
> -
>
> Key: YARN-3471
> URL: https://issues.apache.org/jira/browse/YARN-3471
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 2.8.0
>Reporter: Zhijie Shen
>Assignee: Haibo Chen
>  Labels: YARN-5355
> Attachments: YARN-3471.1.patch, YARN-3471.2.patch
>
>
> I found that the client retry has some problems:
> 1. The new put methods will retry on all exception, but they should only do 
> it upon ConnectException.
> 2. We can reuse TimelineClientConnectionRetry to simplify the retry logic.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3471) Fix timeline client retry

2017-03-02 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892725#comment-15892725
 ] 

Haibo Chen commented on YARN-3471:
--

Assigning this to myself, per the offline discussion in the weekly call.

> Fix timeline client retry
> -
>
> Key: YARN-3471
> URL: https://issues.apache.org/jira/browse/YARN-3471
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 2.8.0
>Reporter: Zhijie Shen
>Assignee: Varun Saxena
>  Labels: YARN-5355
> Attachments: YARN-3471.1.patch, YARN-3471.2.patch
>
>
> I found that the client retry has some problems:
> 1. The new put methods will retry on all exception, but they should only do 
> it upon ConnectException.
> 2. We can reuse TimelineClientConnectionRetry to simplify the retry logic.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6263) NMTokenSecretManagerInRM.createAndGetNMToken is not thread safe

2017-03-02 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6263:
-
Description: NMTokenSecretManagerInRM.createAndGetNMToken modifies values 
of a ConcurrentHashMap, which are of type HashSet, but it only acquires a read 
lock.  (was: NMTokenSecretManagerInRM.createAndGetNMToken modifies values of a 
ConcurrentHashMap, which are of type HashTable, but it only acquires read lock.)
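
For illustration, a minimal sketch of why this is unsafe (names here are 
hypothetical, not the actual RM code): a read lock admits many threads at 
once, so mutating a non-thread-safe HashSet value under it is still a data 
race.
{code:java}
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadLockHazardSketch {
  private final ReadWriteLock lock = new ReentrantReadWriteLock();
  private final ConcurrentMap<String, Set<Integer>> appAttemptToNodes =
      new ConcurrentHashMap<String, Set<Integer>>();

  public void registerAppAttempt(String appAttempt) {
    // Values are plain HashSets, as in the report.
    appAttemptToNodes.putIfAbsent(appAttempt, new HashSet<Integer>());
  }

  public void recordNode(String appAttempt, int nodeId) {
    lock.readLock().lock(); // read lock: many threads may hold it at once
    try {
      Set<Integer> nodes = appAttemptToNodes.get(appAttempt);
      if (nodes != null) {
        // HashSet is not thread safe, so concurrent add() calls here can
        // corrupt the set even though the enclosing map is concurrent.
        nodes.add(nodeId);
      }
    } finally {
      lock.readLock().unlock();
    }
  }
}
{code}
Taking the write lock around the mutation, or using a concurrent set for the 
values, would close the race.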

> NMTokenSecretManagerInRM.createAndGetNMToken is not thread safe
> ---
>
> Key: YARN-6263
> URL: https://issues.apache.org/jira/browse/YARN-6263
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-alpha2
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-6263.01.patch
>
>
> NMTokenSecretManagerInRM.createAndGetNMToken modifies values of a 
> ConcurrentHashMap, which are of type HashSet, but it only acquires a read lock.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6263) NMTokenSecretManagerInRM.createAndGetNMToken is not thread safe

2017-03-02 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892712#comment-15892712
 ] 

Haibo Chen commented on YARN-6263:
--

Thanks [~jlowe] for the quick review!

> NMTokenSecretManagerInRM.createAndGetNMToken is not thread safe
> ---
>
> Key: YARN-6263
> URL: https://issues.apache.org/jira/browse/YARN-6263
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-alpha2
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-6263.01.patch
>
>
> NMTokenSecretManagerInRM.createAndGetNMToken modifies values of a 
> ConcurrentHashMap, which are of type HashTable, but it only acquires a read 
> lock.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6249) TestFairSchedulerPreemption is inconsistently failing on trunk

2017-03-02 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892705#comment-15892705
 ] 

Miklos Szegedi commented on YARN-6249:
--

Thank you, [~Tao Jie], for the debugging and the patch. It makes sense; 
however, shouldn't we take care of the updates in 
sendEnoughNodeUpdatesToAssignFully after the node updates, similar to YARN-6218?

> TestFairSchedulerPreemption is inconsistently failing on trunk
> --
>
> Key: YARN-6249
> URL: https://issues.apache.org/jira/browse/YARN-6249
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.9.0
>Reporter: Sean Po
>Assignee: Yufei Gu
> Attachments: YARN-6249.001.patch
>
>
> Tests in TestFairSchedulerPreemption.java will inconsistently fail on trunk. 
> An example stack trace: 
> {noformat}
> Tests run: 24, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 24.879 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
> testPreemptionSelectNonAMContainer[MinSharePreemptionWithDRF](org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption)
>   Time elapsed: 10.475 sec  <<< FAILURE!
> java.lang.AssertionError: Incorrect number of containers on the greedy app 
> expected:<4> but was:<8>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyPreemption(TestFairSchedulerPreemption.java:288)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.testPreemptionSelectNonAMContainer(TestFairSchedulerPreemption.java:363)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6263) NMTokenSecretManagerInRM.createAndGetNMToken is not thread safe

2017-03-02 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892647#comment-15892647
 ] 

Jason Lowe commented on YARN-6263:
--

+1 lgtm.  The unit test failures do not appear to be related, and the tests 
pass locally for me with the patch applied.  I'll commit this tomorrow if there 
are no objections.

> NMTokenSecretManagerInRM.createAndGetNMToken is not thread safe
> ---
>
> Key: YARN-6263
> URL: https://issues.apache.org/jira/browse/YARN-6263
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-alpha2
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-6263.01.patch
>
>
> NMTokenSecretManagerInRM.createAndGetNMToken modifies values of a 
> ConcurrentHashMap, which are of type HashTable, but it only acquires a read 
> lock.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6269) Pull into native services SLIDER-1185 - container/application diagnostics for enhanced debugging

2017-03-02 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-6269:

Fix Version/s: yarn-native-services

> Pull into native services SLIDER-1185 - container/application diagnostics for 
> enhanced debugging
> 
>
> Key: YARN-6269
> URL: https://issues.apache.org/jira/browse/YARN-6269
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
> Fix For: yarn-native-services
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6269) Pull into native services SLIDER-1185 - container/application diagnostics for enhanced debugging

2017-03-02 Thread Gour Saha (JIRA)
Gour Saha created YARN-6269:
---

 Summary: Pull into native services SLIDER-1185 - 
container/application diagnostics for enhanced debugging
 Key: YARN-6269
 URL: https://issues.apache.org/jira/browse/YARN-6269
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Gour Saha






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5179) Issue of CPU usage of containers

2017-03-02 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892636#comment-15892636
 ] 

Arun Suresh commented on YARN-5179:
---

Thanks for the clarification, and apologies for the delay.
I am in favor of approach 1. [~kasha]/[~kkaranasos], thoughts?

> Issue of CPU usage of containers
> 
>
> Key: YARN-5179
> URL: https://issues.apache.org/jira/browse/YARN-5179
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.0
> Environment: Both on Windows and Linux
>Reporter: Zhongkai Mi
>
> // Multiply by 1000 to avoid losing data when converting to int 
> int milliVcoresUsed = (int) (cpuUsageTotalCoresPercentage * 1000 
>   * maxVCoresAllottedForContainers / nodeCpuPercentageForYARN); 
> This formula will not compute the right vcore-based CPU usage if vcores != 
> physical cores. 
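
For illustration only, plugging hypothetical numbers into the formula quoted 
above (assuming cpuUsageTotalCoresPercentage is the container's usage as a 
percentage of all physical cores; the parameter semantics are assumed, not 
verified against the NodeManager source):
{code:java}
public class MilliVcoresSketch {
  public static void main(String[] args) {
    // Hypothetical node: 8 physical cores, the container saturates exactly
    // one core, and containers may use 100% of the node's CPU.
    float cpuUsageTotalCoresPercentage = 100f / 8; // 12.5% of all cores
    int nodeCpuPercentageForYARN = 100;

    // Case A: vcores configured equal to the physical cores (8).
    int milliVcoresA = (int) (cpuUsageTotalCoresPercentage * 1000
        * 8 / nodeCpuPercentageForYARN);  // 1000, i.e. exactly one vcore

    // Case B: vcores configured to twice the physical cores (16).
    int milliVcoresB = (int) (cpuUsageTotalCoresPercentage * 1000
        * 16 / nodeCpuPercentageForYARN); // 2000 for the same physical usage

    System.out.println(milliVcoresA + " vs " + milliVcoresB);
  }
}
{code}
The result scales with the configured vcore count rather than the physical 
core count, which is the behavior the description questions.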



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5179) Issue of CPU usage of containers

2017-03-02 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892624#comment-15892624
 ] 

Manikandan R commented on YARN-5179:


[~asuresh]

I've updated my earlier comment with more details and described the possible 
solutions. Can you please take a look and provide your inputs?

Thanks

> Issue of CPU usage of containers
> 
>
> Key: YARN-5179
> URL: https://issues.apache.org/jira/browse/YARN-5179
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.0
> Environment: Both on Windows and Linux
>Reporter: Zhongkai Mi
>
> // Multiply by 1000 to avoid losing data when converting to int 
> int milliVcoresUsed = (int) (cpuUsageTotalCoresPercentage * 1000 
>   * maxVCoresAllottedForContainers / nodeCpuPercentageForYARN); 
> This formula will not compute the right vcore-based CPU usage if vcores != 
> physical cores. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6256) Add FROM_ID info key for timeline entities in reader response.

2017-03-02 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892530#comment-15892530
 ] 

Varun Saxena commented on YARN-6256:


Thanks [~rohithsharma] for the patch. This should be quite straightforward, 
considering it follows a similar approach as YARN-6027.
Code-wise, in general it looks fine.
I am in agreement with removing the fromIdPrefix filter, as I do not see a 
concrete use case for fetching entities by fromIdPrefix.

A few minor comments:
# The TimelineEntityFilters class javadoc no longer needs to document 
fromIdPrefix. Also, the fromId javadoc needs to be changed appropriately.
# In the javadoc over FlowRunRowKey#getRowKeyAsString we mention it's the 
inverted flow run id. It is in fact the correct flow run id.
# For javadocs over getRowKeyAsString methods in *RowKey classes, I would 
rather say "Given the encoded row key as string" instead of "Given the raw row 
key as string".
# The test failures are related.
# Most of the checkstyle issues can be handled as well.

> Add FROM_ID info key for timeline entities in reader response. 
> ---
>
> Key: YARN-6256
> URL: https://issues.apache.org/jira/browse/YARN-6256
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6256-YARN-5355.0001.patch
>
>
> It is continuation with YARN-6027 to add FROM_ID key in all other timeline 
> entity responses which includes
> # Flow run entity response. 
> # Application entity response
> # Generic timeline entity response - Here we need to retrospect on idprefix 
> filter which is now separately provided. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6266) Extend the resource class to support ports management

2017-03-02 Thread Lei Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892438#comment-15892438
 ] 

Lei Guo commented on YARN-6266:
---

For this specific use case, it may make more sense to align with anti-affinity 
scheduling.

> Extend the resource class to support ports management
> -
>
> Key: YARN-6266
> URL: https://issues.apache.org/jira/browse/YARN-6266
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: jialei weng
>
> Just like vcores and memory, ports are an important resource for jobs to 
> allocate. We should add port management logic to YARN. It would allow the 
> user to allocate two jobs (with the same port requirement) to different 
> machines. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5956) Refactor ClientRMService

2017-03-02 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892422#comment-15892422
 ] 

Sunil G commented on YARN-5956:
---

Thanks [~lewuathe].
Generally looks fine. Do you mind checking whether the findbugs warnings are 
valid or not?

> Refactor ClientRMService
> 
>
> Key: YARN-5956
> URL: https://issues.apache.org/jira/browse/YARN-5956
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha2
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
> Attachments: YARN-5956.01.patch, YARN-5956.02.patch, 
> YARN-5956.03.patch, YARN-5956.04.patch, YARN-5956.05.patch, 
> YARN-5956.06.patch, YARN-5956.07.patch, YARN-5956.08.patch, 
> YARN-5956.09.patch, YARN-5956.10.patch, YARN-5956.11.patch
>
>
> Some refactoring can be done in {{ClientRMService}}.
> - Remove redundant variable declaration
> - Fill in missing javadocs
> - Proper variable access modifier
> - Fix some typos in method name and exception messages



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5148) [YARN-3368] Add page to new YARN UI to view server side configurations/logs/JVM-metrics

2017-03-02 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892419#comment-15892419
 ] 

Sunil G commented on YARN-5148:
---

Yes, sure. I will help test and review now.

> [YARN-3368] Add page to new YARN UI to view server side 
> configurations/logs/JVM-metrics
> ---
>
> Key: YARN-5148
> URL: https://issues.apache.org/jira/browse/YARN-5148
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp, yarn-ui-v2
>Reporter: Wangda Tan
>Assignee: Kai Sasaki
>  Labels: oct16-medium
> Attachments: pretty-json-metrics.png, Screen Shot 2016-09-11 at 
> 23.28.31.png, Screen Shot 2016-09-13 at 22.27.00.png, 
> UsingStringifyPrint.png, YARN-5148.07.patch, YARN-5148.08.patch, 
> YARN-5148.09.patch, YARN-5148.10.patch, YARN-5148-YARN-3368.01.patch, 
> YARN-5148-YARN-3368.02.patch, YARN-5148-YARN-3368.03.patch, 
> YARN-5148-YARN-3368.04.patch, YARN-5148-YARN-3368.05.patch, 
> YARN-5148-YARN-3368.06.patch, yarn-conf.png, yarn-tools.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5956) Refactor ClientRMService

2017-03-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892405#comment-15892405
 ] 

Hadoop QA commented on YARN-5956:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 8 new + 57 unchanged - 5 fixed = 65 total (was 62) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
6s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 40m 
34s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Redundant nullcheck of application, which is known to be non-null in 
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationAttemptReport(GetApplicationAttemptReportRequest)
  Redundant null check at ClientRMService.java:is known to be non-null in 
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationAttemptReport(GetApplicationAttemptReportRequest)
  Redundant null check at ClientRMService.java:[line 390] |
|  |  Redundant nullcheck of application, which is known to be non-null in 
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationAttempts(GetApplicationAttemptsRequest)
  Redundant null check at ClientRMService.java:is known to be non-null in 
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationAttempts(GetApplicationAttemptsRequest)
  Redundant null check at ClientRMService.java:[line 427] |
|  |  Redundant nullcheck of application, which is known to be non-null in 
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getContainerReport(GetContainerReportRequest)
  Redundant null 

[jira] [Commented] (YARN-5148) [YARN-3368] Add page to new YARN UI to view server side configurations/logs/JVM-metrics

2017-03-02 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892388#comment-15892388
 ] 

Kai Sasaki commented on YARN-5148:
--

[~sunilg] Sorry for the late check. I modified the DOM class a little, but it 
can now show the metrics JSON in pretty-printed style. Thank you so much!
Could you take a look again when you get a chance?

> [YARN-3368] Add page to new YARN UI to view server side 
> configurations/logs/JVM-metrics
> ---
>
> Key: YARN-5148
> URL: https://issues.apache.org/jira/browse/YARN-5148
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp, yarn-ui-v2
>Reporter: Wangda Tan
>Assignee: Kai Sasaki
>  Labels: oct16-medium
> Attachments: pretty-json-metrics.png, Screen Shot 2016-09-11 at 
> 23.28.31.png, Screen Shot 2016-09-13 at 22.27.00.png, 
> UsingStringifyPrint.png, YARN-5148.07.patch, YARN-5148.08.patch, 
> YARN-5148.09.patch, YARN-5148.10.patch, YARN-5148-YARN-3368.01.patch, 
> YARN-5148-YARN-3368.02.patch, YARN-5148-YARN-3368.03.patch, 
> YARN-5148-YARN-3368.04.patch, YARN-5148-YARN-3368.05.patch, 
> YARN-5148-YARN-3368.06.patch, yarn-conf.png, yarn-tools.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5148) [YARN-3368] Add page to new YARN UI to view server side configurations/logs/JVM-metrics

2017-03-02 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated YARN-5148:
-
Attachment: YARN-5148.10.patch

> [YARN-3368] Add page to new YARN UI to view server side 
> configurations/logs/JVM-metrics
> ---
>
> Key: YARN-5148
> URL: https://issues.apache.org/jira/browse/YARN-5148
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp, yarn-ui-v2
>Reporter: Wangda Tan
>Assignee: Kai Sasaki
>  Labels: oct16-medium
> Attachments: pretty-json-metrics.png, Screen Shot 2016-09-11 at 
> 23.28.31.png, Screen Shot 2016-09-13 at 22.27.00.png, 
> UsingStringifyPrint.png, YARN-5148.07.patch, YARN-5148.08.patch, 
> YARN-5148.09.patch, YARN-5148.10.patch, YARN-5148-YARN-3368.01.patch, 
> YARN-5148-YARN-3368.02.patch, YARN-5148-YARN-3368.03.patch, 
> YARN-5148-YARN-3368.04.patch, YARN-5148-YARN-3368.05.patch, 
> YARN-5148-YARN-3368.06.patch, yarn-conf.png, yarn-tools.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5148) [YARN-3368] Add page to new YARN UI to view server side configurations/logs/JVM-metrics

2017-03-02 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated YARN-5148:
-
Attachment: pretty-json-metrics.png

> [YARN-3368] Add page to new YARN UI to view server side 
> configurations/logs/JVM-metrics
> ---
>
> Key: YARN-5148
> URL: https://issues.apache.org/jira/browse/YARN-5148
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp, yarn-ui-v2
>Reporter: Wangda Tan
>Assignee: Kai Sasaki
>  Labels: oct16-medium
> Attachments: pretty-json-metrics.png, Screen Shot 2016-09-11 at 
> 23.28.31.png, Screen Shot 2016-09-13 at 22.27.00.png, 
> UsingStringifyPrint.png, YARN-5148.07.patch, YARN-5148.08.patch, 
> YARN-5148.09.patch, YARN-5148.10.patch, YARN-5148-YARN-3368.01.patch, 
> YARN-5148-YARN-3368.02.patch, YARN-5148-YARN-3368.03.patch, 
> YARN-5148-YARN-3368.04.patch, YARN-5148-YARN-3368.05.patch, 
> YARN-5148-YARN-3368.06.patch, yarn-conf.png, yarn-tools.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6256) Add FROM_ID info key for timeline entities in reader response.

2017-03-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892379#comment-15892379
 ] 

Hadoop QA commented on YARN-6256:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
14s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
40s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
28s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 in YARN-5355 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 11 new 
+ 31 unchanged - 7 fixed = 42 total (was 38) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m 43s{color} 
| {color:red} hadoop-yarn-server-timelineservice-hbase-tests in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities |
|   | hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps |
\\
\\
|| Subsystem || 

[jira] [Updated] (YARN-6256) Add FROM_ID info key for timeline entities in reader response.

2017-03-02 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-6256:

Attachment: YARN-6256-YARN-5355.0001.patch

Updated the patch with the following changes, similar to YARN-6027:
# Followed the same approach for FlowRunRowKey, ApplicationRowKey and 
EntityRowKey. 
# Removed the fromIdPrefix filter, which is no longer required since we are 
providing the FROM_ID info in the entity response. 
# Updated the javadoc for fromId in TimelineReaderWebServices. 

cc: [~varun_saxena] [~sjlee0]

> Add FROM_ID info key for timeline entities in reader response. 
> ---
>
> Key: YARN-6256
> URL: https://issues.apache.org/jira/browse/YARN-6256
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6256-YARN-5355.0001.patch
>
>
> It is continuation with YARN-6027 to add FROM_ID key in all other timeline 
> entity responses which includes
> # Flow run entity response. 
> # Application entity response
> # Generic timeline entity response - Here we need to retrospect on idprefix 
> filter which is now separately provided. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6249) TestFairSchedulerPreemption is inconsistently failing on trunk

2017-03-02 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892327#comment-15892327
 ] 

Tao Jie edited comment on YARN-6249 at 3/2/17 2:32 PM:
---

In the attached patch, I called update() explicitly between when app1 is 
allocated and when app2 is submitted, to ensure {{minShareStarvation}} of 
root.preemptable.child-2 is refreshed.
[~yufeigu] [~kasha], would you take a look at it? I ran this case 300 times 
with no failure, while 3 of 100 runs failed without this patch. 


was (Author: tao jie):
In the attached patch, I called update() explicitly after app1 is allocated 
and before app2 is submitted, to ensure {{minShareStarvation}} of 
root.preemptable.child-2 is refreshed.
[~yufeigu] [~kasha], would you take a look at it? I ran this case 300 times 
with no failure, while 3 of 100 runs failed without this patch. 

> TestFairSchedulerPreemption is inconsistently failing on trunk
> --
>
> Key: YARN-6249
> URL: https://issues.apache.org/jira/browse/YARN-6249
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.9.0
>Reporter: Sean Po
>Assignee: Yufei Gu
> Attachments: YARN-6249.001.patch
>
>
> Tests in TestFairSchedulerPreemption.java will inconsistently fail on trunk. 
> An example stack trace: 
> {noformat}
> Tests run: 24, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 24.879 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
> testPreemptionSelectNonAMContainer[MinSharePreemptionWithDRF](org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption)
>   Time elapsed: 10.475 sec  <<< FAILURE!
> java.lang.AssertionError: Incorrect number of containers on the greedy app 
> expected:<4> but was:<8>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyPreemption(TestFairSchedulerPreemption.java:288)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.testPreemptionSelectNonAMContainer(TestFairSchedulerPreemption.java:363)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6249) TestFairSchedulerPreemption is inconsistently failing on trunk

2017-03-02 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892327#comment-15892327
 ] 

Tao Jie commented on YARN-6249:
---

In the attached patch, I called update() explicitly after app1 is allocated 
and before app2 is submitted, to ensure {{minShareStarvation}} of 
root.preemptable.child-2 is refreshed.
[~yufeigu] [~kasha], would you take a look at it? I ran this case 300 times 
with no failure, while 3 of 100 runs failed without this patch. 

> TestFairSchedulerPreemption is inconsistently failing on trunk
> --
>
> Key: YARN-6249
> URL: https://issues.apache.org/jira/browse/YARN-6249
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.9.0
>Reporter: Sean Po
>Assignee: Yufei Gu
> Attachments: YARN-6249.001.patch
>
>
> Tests in TestFairSchedulerPreemption.java will inconsistently fail on trunk. 
> An example stack trace: 
> {noformat}
> Tests run: 24, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 24.879 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
> testPreemptionSelectNonAMContainer[MinSharePreemptionWithDRF](org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption)
>   Time elapsed: 10.475 sec  <<< FAILURE!
> java.lang.AssertionError: Incorrect number of containers on the greedy app 
> expected:<4> but was:<8>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyPreemption(TestFairSchedulerPreemption.java:288)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.testPreemptionSelectNonAMContainer(TestFairSchedulerPreemption.java:363)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6249) TestFairSchedulerPreemption is inconsistently failing on trunk

2017-03-02 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated YARN-6249:
--
Attachment: YARN-6249.001.patch

> TestFairSchedulerPreemption is inconsistently failing on trunk
> --
>
> Key: YARN-6249
> URL: https://issues.apache.org/jira/browse/YARN-6249
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.9.0
>Reporter: Sean Po
>Assignee: Yufei Gu
> Attachments: YARN-6249.001.patch
>
>
> Tests in TestFairSchedulerPreemption.java will inconsistently fail on trunk. 
> An example stack trace: 
> {noformat}
> Tests run: 24, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 24.879 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
> testPreemptionSelectNonAMContainer[MinSharePreemptionWithDRF](org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption)
>   Time elapsed: 10.475 sec  <<< FAILURE!
> java.lang.AssertionError: Incorrect number of containers on the greedy app 
> expected:<4> but was:<8>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyPreemption(TestFairSchedulerPreemption.java:288)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.testPreemptionSelectNonAMContainer(TestFairSchedulerPreemption.java:363)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6249) TestFairSchedulerPreemption is inconsistently failing on trunk

2017-03-02 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892305#comment-15892305
 ] 

Tao Jie commented on YARN-6249:
---

I debugged this test and found the root cause of the failure.
In the test, the FSLeafQueues are initialized before {{scheduler.setClock(clock)}} 
is called in setup(). As a result, {{lastTimeAtMinShare}} in FSLeafQueue is 
initialized to the long value of the current time (a large number), and it is 
compared against the time of the {{ControlledClock}}, which starts from 0.
In {{FSLeafQueue#minShareStarvation}}, invoked in update():
{code}
long now = scheduler.getClock().getTime();
if (!starved) {
  // Record that the queue is not starved
  setLastTimeAtMinShare(now);
}

if (now - lastTimeAtMinShare < getMinSharePreemptionTimeout()) {
  // the queue is not starved for the preemption timeout
  starvation = Resources.clone(Resources.none());
}
{code}
If {{starved}} is true here the first time this method is called, this queue 
will never satisfy the min share preemption timeout.
However, I don't think it is a bug in the real world, because this issue is 
related to the ControlledClock, which is only used in tests. 
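
With purely illustrative numbers, the clock mismatch looks like this:
{code:java}
public class ClockMismatchSketch {
  public static void main(String[] args) {
    // Illustrative values only: FSLeafQueue captures wall-clock time at
    // construction, while the test's ControlledClock starts at 0.
    long lastTimeAtMinShare = System.currentTimeMillis(); // a huge number
    long now = 0L;                       // ControlledClock time in the test
    long minSharePreemptionTimeout = 10000L;

    // now - lastTimeAtMinShare is hugely negative, so the check below is
    // always true and the computed starvation is cleared on every update().
    System.out.println(
        (now - lastTimeAtMinShare) < minSharePreemptionTimeout); // true
  }
}
{code}
This explains why the starved queue never reaches its preemption timeout in 
the failing runs.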


> TestFairSchedulerPreemption is inconsistently failing on trunk
> --
>
> Key: YARN-6249
> URL: https://issues.apache.org/jira/browse/YARN-6249
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.9.0
>Reporter: Sean Po
>Assignee: Yufei Gu
>
> Tests in TestFairSchedulerPreemption.java will inconsistently fail on trunk. 
> An example stack trace: 
> {noformat}
> Tests run: 24, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 24.879 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
> testPreemptionSelectNonAMContainer[MinSharePreemptionWithDRF](org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption)
>   Time elapsed: 10.475 sec  <<< FAILURE!
> java.lang.AssertionError: Incorrect number of containers on the greedy app 
> expected:<4> but was:<8>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyPreemption(TestFairSchedulerPreemption.java:288)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.testPreemptionSelectNonAMContainer(TestFairSchedulerPreemption.java:363)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5956) Refactor ClientRMService

2017-03-02 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated YARN-5956:
-
Attachment: YARN-5956.11.patch

> Refactor ClientRMService
> 
>
> Key: YARN-5956
> URL: https://issues.apache.org/jira/browse/YARN-5956
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha2
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
> Attachments: YARN-5956.01.patch, YARN-5956.02.patch, 
> YARN-5956.03.patch, YARN-5956.04.patch, YARN-5956.05.patch, 
> YARN-5956.06.patch, YARN-5956.07.patch, YARN-5956.08.patch, 
> YARN-5956.09.patch, YARN-5956.10.patch, YARN-5956.11.patch
>
>
> Some refactoring can be done in {{ClientRMService}}.
> - Remove redundant variable declaration
> - Fill in missing javadocs
> - Proper variable access modifier
> - Fix some typos in method name and exception messages



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6268) Container with extra data

2017-03-02 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892251#comment-15892251
 ] 

Rohith Sharma K S commented on YARN-6268:
-

Could you elaborate more with a use case example, such as what extra data the 
user wants to attach?

> Container with extra data
> -
>
> Key: YARN-6268
> URL: https://issues.apache.org/jira/browse/YARN-6268
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api
>Affects Versions: 3.0.0-alpha2
>Reporter: 冯健
>
> Implement a container which can carry extra data (e.g., some user-defined 
> data), so the user can perform some operations with it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6267) Can't create directory xxx/application_1488445897886_0001 - Permission denied

2017-03-02 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892240#comment-15892240
 ] 

Rohith Sharma K S commented on YARN-6267:
-

[~cjn082030] I appreciate your interest in using Hadoop. From the exception 
log, it appears you have misconfigured or missed a few steps for running the 
LinuxContainerExecutor. Could you check [YARN Secure 
Containers|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SecureMode.html#LinuxContainerExecutor]
 for the proper configuration. 

And please use the [Hadoop User Mailing 
List|https://hadoop.apache.org/mailing_lists.html] for questions if you 
encounter any issue while configuring LCE. JIRAs are used for tracking 
development activities or for bug reports.

> Can't create directory xxx/application_1488445897886_0001 - Permission denied
> -
>
> Key: YARN-6267
> URL: https://issues.apache.org/jira/browse/YARN-6267
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: yarn
> Environment: yarn-site.xml:
> yarn.nodemanager.container-executor.class = 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor,
>Reporter: cjn082030
>
> Sorry, I have a problem with YARN, could anyone help?
> I ran wordcount (a MapReduce job). The MapReduce job FAILED.
> Diagnostics Info:
> Application application_1488445897886_0001 failed 2 times due to AM Container 
> for appattempt_1488445897886_0001_02 exited with exitCode: -1000
> For more detailed output, check application tracking 
> page:http://test:8088/cluster/app/application_1488445897886_0001Then, click 
> on links to logs of each attempt.
> Diagnostics: Application application_1488445897886_0001 initialization failed 
> (exitCode=255) with output: main : command provided 0
> main : user is nobody
> main : requested yarn user is wang1
> Can't create directory 
> /cjntest/tmp/yarn/local-dirs/usercache/wang1/appcache/application_1488445897886_0001
>  - Permission denied
> Did not create any app directories
> Failing this attempt. Failing the application.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-6267) Can't create directory xxx/application_1488445897886_0001 - Permission denied

2017-03-02 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S resolved YARN-6267.
-
Resolution: Invalid

> Can't create directory xxx/application_1488445897886_0001 - Permission denied
> -
>
> Key: YARN-6267
> URL: https://issues.apache.org/jira/browse/YARN-6267
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: yarn
> Environment: yarn-site.xml:
> yarn.nodemanager.container-executor.class = 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor,
>Reporter: cjn082030
>
> Sorry, I have a problem with YARN, could anyone help?
> I ran wordcount (a MapReduce job). The MapReduce job FAILED.
> Diagnostics Info:
> Application application_1488445897886_0001 failed 2 times due to AM Container 
> for appattempt_1488445897886_0001_02 exited with exitCode: -1000
> For more detailed output, check application tracking 
> page:http://test:8088/cluster/app/application_1488445897886_0001Then, click 
> on links to logs of each attempt.
> Diagnostics: Application application_1488445897886_0001 initialization failed 
> (exitCode=255) with output: main : command provided 0
> main : user is nobody
> main : requested yarn user is wang1
> Can't create directory 
> /cjntest/tmp/yarn/local-dirs/usercache/wang1/appcache/application_1488445897886_0001
>  - Permission denied
> Did not create any app directories
> Failing this attempt. Failing the application.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5147) [YARN-3368] Showing JMX metrics for YARN servers on new YARN UI

2017-03-02 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-5147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892191#comment-15892191
 ] 

Gergely Novák commented on YARN-5147:
-

Can someone please add more details to this Jira? Which JMX metrics would you 
be interested in viewing in the new UI? Or is all you need a link to the old 
UI?
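
Worth noting while details are gathered: every YARN daemon already serves its 
JMX metrics as JSON over HTTP through the JMXJsonServlet, so the new UI could 
fetch them the same way the old UI's /jmx link does. A sketch against the 
ResourceManager, assuming a host named rm-host and the default web port:

{code}
# All MBeans, as JSON
curl 'http://rm-host:8088/jmx'
# Narrow the result with the qry parameter (object name is illustrative)
curl 'http://rm-host:8088/jmx?qry=Hadoop:service=ResourceManager,name=JvmMetrics'
{code}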

> [YARN-3368] Showing JMX metrics for YARN servers on new YARN UI
> ---
>
> Key: YARN-5147
> URL: https://issues.apache.org/jira/browse/YARN-5147
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sreenath Somarajapuram
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6153) keepContainer does not work when AM retry window is set

2017-03-02 Thread kyungwan nam (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kyungwan nam updated YARN-6153:
---
Attachment: YARN-6153.006-1.patch

I'm uploading an additional patch for hadoop trunk (YARN-6153.006-1.patch).
Problem 1 above has been fixed in the same way as on branch-2.8.



> keepContainer does not work when AM retry window is set
> ---
>
> Key: YARN-6153
> URL: https://issues.apache.org/jira/browse/YARN-6153
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.1
>Reporter: kyungwan nam
>Assignee: kyungwan nam
> Fix For: 2.8.0, 3.0.0-alpha3
>
> Attachments: YARN-6153.001.patch, YARN-6153.002.patch, 
> YARN-6153.003.patch, YARN-6153.004.patch, YARN-6153.005.patch, 
> YARN-6153.006-1.patch, YARN-6153.006.patch, YARN-6153-branch-2.8.patch
>
>
> yarn.resourcemanager.am.max-attempts is configured to 2 in my cluster.
> I submitted a YARN application (a Slider app) with keepContainers=true and 
> attemptFailuresValidityInterval=30.
> It worked properly when the AM failed the first time:
> all containers launched by the previous AM were resynced with the new AM 
> (attempt2) without being killed.
> After 10 minutes, I thought the AM failure count had been reset by 
> attemptFailuresValidityInterval (5 minutes).
> But all containers were killed when the AM failed a second time (although the 
> new AM, attempt3, was launched properly).
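
For context, both knobs are per-application settings on the submission context. 
A minimal sketch of how a client sets them; the client setup is abbreviated and 
the interval value is illustrative, not the reporter's exact one:

{code}
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.client.api.YarnClient;

// Sketch: enable work-preserving AM restart with a failure-validity window.
YarnClient client = YarnClient.createYarnClient();
// client.init(conf) and client.start() omitted for brevity.
ApplicationSubmissionContext context =
    client.createApplication().getApplicationSubmissionContext();
context.setKeepContainersAcrossApplicationAttempts(true);
context.setAttemptFailuresValidityInterval(5 * 60 * 1000L); // 5 min, in ms
{code}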



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5496) Make Node Heatmap Chart categories clickable

2017-03-02 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892158#comment-15892158
 ] 

Sunil G commented on YARN-5496:
---

Cool, it's very nice.

I will also check the code and update you if there are any issues.

> Make Node Heatmap Chart categories clickable
> 
>
> Key: YARN-5496
> URL: https://issues.apache.org/jira/browse/YARN-5496
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yesha Vora
>Assignee: Gergely Novák
> Attachments: YARN-5496.001.patch, YARN-5496.002.patch, 
> YARN-5496.003.patch, YARN-5496.004.patch
>
>
> Make Node Heatmap Chart categories clickable. 
> The heatmap chart has a few categories, such as 10% used, 30% used, etc.
> These tags should be clickable: if the user clicks the 10% used tag, it should 
> show the hosts with 10% usage. This can be a useful feature for clusters with 
> 1000s of nodes.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5496) Make Node Heatmap Chart categories clickable

2017-03-02 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-5496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892153#comment-15892153
 ] 

Gergely Novák edited comment on YARN-5496 at 3/2/17 12:31 PM:
--

[~sunilg] nice catch! Sorry for missing this; fixed in patch #4.


was (Author: gergelynovak):
[~sunilg] nice catch! Sorry for missing this..

> Make Node Heatmap Chart categories clickable
> 
>
> Key: YARN-5496
> URL: https://issues.apache.org/jira/browse/YARN-5496
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yesha Vora
>Assignee: Gergely Novák
> Attachments: YARN-5496.001.patch, YARN-5496.002.patch, 
> YARN-5496.003.patch, YARN-5496.004.patch
>
>
> Make Node Heatmap Chart categories clickable. 
> The heatmap chart has a few categories, such as 10% used, 30% used, etc.
> These tags should be clickable: if the user clicks the 10% used tag, it should 
> show the hosts with 10% usage. This can be a useful feature for clusters with 
> 1000s of nodes.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5496) Make Node Heatmap Chart categories clickable

2017-03-02 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-5496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892153#comment-15892153
 ] 

Gergely Novák commented on YARN-5496:
-

[~sunilg] nice catch! Sorry for missing this..

> Make Node Heatmap Chart categories clickable
> 
>
> Key: YARN-5496
> URL: https://issues.apache.org/jira/browse/YARN-5496
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yesha Vora
>Assignee: Gergely Novák
> Attachments: YARN-5496.001.patch, YARN-5496.002.patch, 
> YARN-5496.003.patch, YARN-5496.004.patch
>
>
> Make Node Heatmap Chart categories clickable. 
> The heatmap chart has a few categories, such as 10% used, 30% used, etc.
> These tags should be clickable: if the user clicks the 10% used tag, it should 
> show the hosts with 10% usage. This can be a useful feature for clusters with 
> 1000s of nodes.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5496) Make Node Heatmap Chart categories clickable

2017-03-02 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-5496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated YARN-5496:

Attachment: YARN-5496.004.patch

> Make Node Heatmap Chart categories clickable
> 
>
> Key: YARN-5496
> URL: https://issues.apache.org/jira/browse/YARN-5496
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yesha Vora
>Assignee: Gergely Novák
> Attachments: YARN-5496.001.patch, YARN-5496.002.patch, 
> YARN-5496.003.patch, YARN-5496.004.patch
>
>
> Make Node Heatmap Chart categories clickable. 
> The heatmap chart has a few categories, such as 10% used, 30% used, etc.
> These tags should be clickable: if the user clicks the 10% used tag, it should 
> show the hosts with 10% usage. This can be a useful feature for clusters with 
> 1000s of nodes.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5151) [YARN-3368] Support kill application from new YARN UI

2017-03-02 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892142#comment-15892142
 ] 

Gergely Novák commented on YARN-5151:
-

For now I have put the "Kill application" button on the Information page, under 
Basic Info. Patch #2 covers all my previous TODOs: a modal confirmation 
dialog, a page refresh on success, and basic error handling.
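
For reference, killing an application is exposed by the RM's documented Cluster 
Application State REST API, which is presumably what the button invokes 
underneath. A sketch, with an assumed RM address and an illustrative 
application id:

{code}
# Ask the RM to transition the app to KILLED (id is illustrative)
curl -X PUT -H 'Content-Type: application/json' \
  -d '{"state": "KILLED"}' \
  'http://rm-host:8088/ws/v1/cluster/apps/application_1488445897886_0001/state'
{code}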

> [YARN-3368] Support kill application from new YARN UI
> -
>
> Key: YARN-5151
> URL: https://issues.apache.org/jira/browse/YARN-5151
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Gergely Novák
> Attachments: YARN-5151.001.patch, YARN-5151.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5151) [YARN-3368] Support kill application from new YARN UI

2017-03-02 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated YARN-5151:

Attachment: YARN-5151.002.patch

> [YARN-3368] Support kill application from new YARN UI
> -
>
> Key: YARN-5151
> URL: https://issues.apache.org/jira/browse/YARN-5151
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Gergely Novák
> Attachments: YARN-5151.001.patch, YARN-5151.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6268) Container with extra data

2017-03-02 Thread JIRA
冯健 created YARN-6268:


 Summary: Container with extra data
 Key: YARN-6268
 URL: https://issues.apache.org/jira/browse/YARN-6268
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: api
Affects Versions: 3.0.0-alpha2
Reporter: 冯健


Implement a container that can carry extra data (e.g., some user-defined 
data), so that users can perform operations with it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6153) keepContainer does not work when AM retry window is set

2017-03-02 Thread kyungwan nam (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15892129#comment-15892129
 ] 

kyungwan nam commented on YARN-6153:


Why didn't I face problem 1 above on hadoop trunk?
It is not intentional, but there is already Thread.sleep code in trunk that 
sleeps for 15 seconds:

{code}
//Wait to make sure attempt1 be removed in State Store
//TODO explore a better way than sleeping for a while (YARN-4929)
Thread.sleep(15 * 1000);
{code}
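
Until YARN-4929 lands, one possible alternative to the fixed sleep is a bounded 
poll; a sketch, where isAttempt1Removed() is a hypothetical helper that checks 
the state store:

{code}
import org.apache.hadoop.test.GenericTestUtils;
import com.google.common.base.Supplier;

// Sketch: poll every 100 ms, give up after 15 s, instead of always sleeping 15 s.
GenericTestUtils.waitFor(new Supplier<Boolean>() {
  @Override
  public Boolean get() {
    return isAttempt1Removed(); // hypothetical predicate on the state store
  }
}, 100, 15 * 1000);
{code}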

> keepContainer does not work when AM retry window is set
> ---
>
> Key: YARN-6153
> URL: https://issues.apache.org/jira/browse/YARN-6153
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.1
>Reporter: kyungwan nam
>Assignee: kyungwan nam
> Fix For: 2.8.0, 3.0.0-alpha3
>
> Attachments: YARN-6153.001.patch, YARN-6153.002.patch, 
> YARN-6153.003.patch, YARN-6153.004.patch, YARN-6153.005.patch, 
> YARN-6153.006.patch, YARN-6153-branch-2.8.patch
>
>
> yarn.resourcemanager.am.max-attempts is configured to 2 in my cluster.
> I submitted a YARN application (a Slider app) with keepContainers=true and 
> attemptFailuresValidityInterval=30.
> It worked properly when the AM failed the first time:
> all containers launched by the previous AM were resynced with the new AM 
> (attempt2) without being killed.
> After 10 minutes, I thought the AM failure count had been reset by 
> attemptFailuresValidityInterval (5 minutes).
> But all containers were killed when the AM failed a second time (although the 
> new AM, attempt3, was launched properly).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6153) keepContainer does not work when AM retry window is set

2017-03-02 Thread kyungwan nam (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kyungwan nam updated YARN-6153:
---
Attachment: YARN-6153-branch-2.8.patch

I'm uploading the patch for branch-2.8.

1. In testRMAppAttemptFailuresValidityInterval, manipulating the systemClock 
has been replaced with Thread.sleep.

With the following change, the validity interval is no longer checked against 
the systemClock in RMAppImpl (the removed condition is restated after item 2 
below):

{code}
-  private int getNumFailedAppAttempts() {
+  public int getNumFailedAppAttempts() {
     int completedAttempts = 0;
-    long endTime = this.systemClock.getTime();
     // Do not count AM preemption, hardware failures or NM resync
     // as attempt failure.
     for (RMAppAttempt attempt : attempts.values()) {
       if (attempt.shouldCountTowardsMaxAttemptRetry()) {
-        if (this.attemptFailuresValidityInterval <= 0
-            || (attempt.getFinishTime() > endTime
-                - this.attemptFailuresValidityInterval)) {
-          completedAttempts++;
-        }
+        completedAttempts++;
       }
     }
{code}

2. In testAMRestartNotLostContainerAfterAttemptFailuresValidityInterval, the 
timeout has been increased to 40 seconds.

YARN-4807 is not yet included in branch-2.8, and I think that is why the 
timeout happens.
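
To restate the condition that the item 1 diff removes: a failed attempt counted 
toward max-attempts only if no validity window was set, or it finished inside 
the window. Illustrative Java, not part of the patch:

{code}
// Count the attempt iff the window is disabled (<= 0) or the attempt
// finished within the last attemptFailuresValidityInterval milliseconds.
boolean counts = attemptFailuresValidityInterval <= 0
    || attempt.getFinishTime() > endTime - attemptFailuresValidityInterval;
{code}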


> keepContainer does not work when AM retry window is set
> ---
>
> Key: YARN-6153
> URL: https://issues.apache.org/jira/browse/YARN-6153
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.1
>Reporter: kyungwan nam
>Assignee: kyungwan nam
> Fix For: 2.8.0, 3.0.0-alpha3
>
> Attachments: YARN-6153.001.patch, YARN-6153.002.patch, 
> YARN-6153.003.patch, YARN-6153.004.patch, YARN-6153.005.patch, 
> YARN-6153.006.patch, YARN-6153-branch-2.8.patch
>
>
> yarn.resourcemanager.am.max-attempts is configured to 2 in my cluster.
> I submitted a YARN application (a Slider app) with keepContainers=true and 
> attemptFailuresValidityInterval=30.
> It worked properly when the AM failed the first time:
> all containers launched by the previous AM were resynced with the new AM 
> (attempt2) without being killed.
> After 10 minutes, I thought the AM failure count had been reset by 
> attemptFailuresValidityInterval (5 minutes).
> But all containers were killed when the AM failed a second time (although the 
> new AM, attempt3, was launched properly).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


