[jira] [Commented] (YARN-4496) Improve HA ResourceManager Failover detection on the client

2016-01-07 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088332#comment-15088332
 ] 

Arun Suresh commented on YARN-4496:
---

Hey [~jianhe], feel free to take it up. I can help with the reviews.

> Improve HA ResourceManager Failover detection on the client
> ---
>
> Key: YARN-4496
> URL: https://issues.apache.org/jira/browse/YARN-4496
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client, resourcemanager
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> HDFS deployments can currently use the {{RequestHedgingProxyProvider}} to 
> improve Namenode failover detection in the client. It does this by 
> concurrently trying all namenodes and picking the namenode that returns the 
> fastest successful response as the active node.
> It would be useful to have a similar ProxyProvider for the YARN RM (this can 
> possibly be done by converging some of the class hierarchies to use the same 
> ProxyProvider).
> This would be especially useful for large YARN deployments with multiple 
> standby RMs, where clients could pick the active RM without having to 
> traverse a list of configured RMs. 
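
To illustrate the hedging idea (fire the same call at every endpoint and take 
the first successful response), here is a minimal, generic sketch; the class 
below is hypothetical and is not the HDFS {{RequestHedgingProxyProvider}} 
implementation:

{code}
import java.util.List;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Function;

/** Hypothetical helper: invoke a call on all proxies, return the fastest success. */
public class HedgingInvoker<T> {
  private final ExecutorService pool = Executors.newCachedThreadPool();

  public <R> R invokeFirstSuccessful(List<T> proxies, Function<T, R> call)
      throws InterruptedException, ExecutionException {
    CompletionService<R> results = new ExecutorCompletionService<>(pool);
    for (T proxy : proxies) {
      results.submit(() -> call.apply(proxy));   // try every endpoint concurrently
    }
    ExecutionException lastFailure = null;
    for (int i = 0; i < proxies.size(); i++) {
      try {
        return results.take().get();             // fastest successful response wins
      } catch (ExecutionException e) {
        lastFailure = e;                         // standby or failed endpoint; keep waiting
      }
    }
    throw lastFailure != null ? lastFailure
        : new ExecutionException(new IllegalStateException("no proxies configured"));
  }
}
{code}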



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4559) Make leader elector and zk store share the same curator client

2016-01-07 Thread Jian He (JIRA)
Jian He created YARN-4559:
-

 Summary: Make leader elector and zk store share the same curator 
client
 Key: YARN-4559
 URL: https://issues.apache.org/jira/browse/YARN-4559
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4560) Make scheduler error checking message more user friendly

2016-01-07 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated YARN-4560:
-
Summary: Make scheduler error checking message more user friendly  (was: 
Make )

> Make scheduler error checking message more user friendly
> 
>
> Key: YARN-4560
> URL: https://issues.apache.org/jira/browse/YARN-4560
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.8.0
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Trivial
>
> If the YARN properties below are poorly configured:
> {code}
> yarn.scheduler.minimum-allocation-mb
> yarn.scheduler.maximum-allocation-mb
> {code}
> The error message that shows up in the RM is:
> {panel}
> 2016-01-07 14:47:03,711 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> ResourceManager
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Invalid resource 
> scheduler memory allocation configuration, 
> yarn.scheduler.minimum-allocation-mb=-1, 
> yarn.scheduler.maximum-allocation-mb=-3, min should equal greater than 0, max 
> should be no smaller than min.
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.validateConf(FairScheduler.java:215)
> {panel}
> While it's technically correct, it's not very user friendly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4560) Make scheduler error checking message more user friendly

2016-01-07 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated YARN-4560:
-
Attachment: YARN-4560.001.patch

Initial version.

> Make scheduler error checking message more user friendly
> 
>
> Key: YARN-4560
> URL: https://issues.apache.org/jira/browse/YARN-4560
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.8.0
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Trivial
> Attachments: YARN-4560.001.patch
>
>
> If the YARN properties below are poorly configured:
> {code}
> yarn.scheduler.minimum-allocation-mb
> yarn.scheduler.maximum-allocation-mb
> {code}
> The error message that shows up in the RM is:
> {panel}
> 2016-01-07 14:47:03,711 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> ResourceManager
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Invalid resource 
> scheduler memory allocation configuration, 
> yarn.scheduler.minimum-allocation-mb=-1, 
> yarn.scheduler.maximum-allocation-mb=-3, min should equal greater than 0, max 
> should be no smaller than min.
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.validateConf(FairScheduler.java:215)
> {panel}
> While it's technically correct, it's not very user friendly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4200) Refactor reader classes in storage to nest under hbase specific package name

2016-01-07 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088391#comment-15088391
 ] 

Vrushali C commented on YARN-4200:
--

+ 1, thanks [~gtCarrera9]

> Refactor reader classes in storage to nest under hbase specific package name
> 
>
> Key: YARN-4200
> URL: https://issues.apache.org/jira/browse/YARN-4200
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Li Lu
>Priority: Minor
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-4200-YARN-2928.001.patch, 
> YARN-4200-feature-YARN-2928.002.patch, YARN-4200-feature-YARN-2928.003.patch
>
>
> As suggested by [~gtCarrera9] in YARN-4074, filing jira to refactor the code 
> to group together the reader classes under a package in storage that 
> indicates these are hbase specific. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4180) AMLauncher does not retry on failures when talking to NM

2016-01-07 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088392#comment-15088392
 ] 

Junping Du commented on YARN-4180:
--

Thanks [~kasha] for help on this.

> AMLauncher does not retry on failures when talking to NM 
> -
>
> Key: YARN-4180
> URL: https://issues.apache.org/jira/browse/YARN-4180
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.1
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>Priority: Critical
> Fix For: 2.7.2, 2.6.4
>
> Attachments: YARN-4180-branch-2.7.2.txt, YARN-4180.001.patch, 
> YARN-4180.002.patch, YARN-4180.002.patch, YARN-4180.002.patch
>
>
> We see issues with the RM trying to launch a container while an NM is 
> restarting, and we get exceptions like NMNotReadyException. While YARN-3842 
> added retries for other clients of the NM (AMs mainly), they are not used by 
> the AMLauncher in the RM, so these intermittent errors cause job failures. 
> This can manifest during rolling restarts of NMs. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4553) Add cgroups support for docker containers

2016-01-07 Thread Sidharta Seethana (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sidharta Seethana updated YARN-4553:

Attachment: YARN-4553.003.patch

Uploaded a new patch based on review comments.

> Add cgroups support for docker containers
> -
>
> Key: YARN-4553
> URL: https://issues.apache.org/jira/browse/YARN-4553
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Attachments: YARN-4553.001.patch, YARN-4553.002.patch, 
> YARN-4553.003.patch
>
>
> Currently, cgroups-based resource isolation does not work with docker 
> containers under YARN. The processes in these containers are launched by the 
> docker daemon and they are not children of a container-executor process. 
> Docker supports a --cgroup-parent flag which can be used to point to the 
> container-specific cgroups that are created by the nodemanager. This will 
> allow the Nodemanager to manage cgroups (as it does today) while allowing 
> resource isolation to work with docker containers. 
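
As a rough sketch of the flag usage (the image, container id and cgroup path 
below are made up; this is not the actual container-executor change), the 
NM-created cgroup would be passed to docker when the run command is assembled:

{code}
import java.util.Arrays;
import java.util.List;

public class DockerCgroupParentSketch {
  // Hypothetical: build a docker run command that reuses the NM-created cgroup.
  static List<String> buildRunCommand(String containerId, String image) {
    String cgroupParent = "/hadoop-yarn/" + containerId;   // assumed NM cgroup path
    return Arrays.asList(
        "docker", "run", "--name=" + containerId,
        "--cgroup-parent=" + cgroupParent,   // docker places the container under this cgroup
        image);
  }

  public static void main(String[] args) throws Exception {
    List<String> cmd = buildRunCommand("container_e01_0001_01_000002", "centos:7");
    new ProcessBuilder(cmd).inheritIO().start().waitFor();
  }
}
{code}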



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4560) Make

2016-01-07 Thread Ray Chiang (JIRA)
Ray Chiang created YARN-4560:


 Summary: Make 
 Key: YARN-4560
 URL: https://issues.apache.org/jira/browse/YARN-4560
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Affects Versions: 2.8.0
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Trivial


If the YARN properties below are poorly configured:

{code}
yarn.scheduler.minimum-allocation-mb
yarn.scheduler.maximum-allocation-mb
{code}

The error message that shows up in the RM is:

{panel}
2016-01-07 14:47:03,711 FATAL 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
ResourceManager
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Invalid resource 
scheduler memory allocation configuration, 
yarn.scheduler.minimum-allocation-mb=-1, 
yarn.scheduler.maximum-allocation-mb=-3, min should equal greater than 0, max 
should be no smaller than min.
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.validateConf(FairScheduler.java:215)
{panel}

While it's technically correct, it's not very user friendly.
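
One possible direction for a friendlier check (sketch only; the wording and 
defaults here are suggestions and may differ from what the actual patch does):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;

public class SchedulerMemoryConfCheckSketch {
  static void validateMemorySettings(Configuration conf) {
    // Defaults shown here are illustrative only.
    int minMem = conf.getInt("yarn.scheduler.minimum-allocation-mb", 1024);
    int maxMem = conf.getInt("yarn.scheduler.maximum-allocation-mb", 8192);
    if (minMem <= 0 || maxMem < minMem) {
      throw new YarnRuntimeException(
          "Invalid scheduler memory allocation settings:"
          + " yarn.scheduler.minimum-allocation-mb=" + minMem
          + " must be greater than 0, and"
          + " yarn.scheduler.maximum-allocation-mb=" + maxMem
          + " must be greater than or equal to the minimum.");
    }
  }
}
{code}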



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4559) Make leader elector and zk store share the same curator client

2016-01-07 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-4559:
--
Description: After YARN-4438, we reuse the same curator client for leader 
elector and zk store

> Make leader elector and zk store share the same curator client
> --
>
> Key: YARN-4559
> URL: https://issues.apache.org/jira/browse/YARN-4559
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-4559.1.patch
>
>
> After YARN-4438, we reuse the same curator client for leader elector and zk 
> store



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4559) Make leader elector and zk store share the same curator client

2016-01-07 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-4559:
--
Description: After YARN-4438, we can reuse the same curator client for 
leader elector and zk store  (was: After YARN-4438, we reuse the same curator 
client for leader elector and zk store)

> Make leader elector and zk store share the same curator client
> --
>
> Key: YARN-4559
> URL: https://issues.apache.org/jira/browse/YARN-4559
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-4559.1.patch
>
>
> After YARN-4438, we can reuse the same curator client for leader elector and 
> zk store



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3102) Decommisioned Nodes not listed in Web UI

2016-01-07 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated YARN-3102:
--
Attachment: YARN-3102-v1.patch

This patch adds semi-dummy entries to the active RMNodes map on init() for all 
nodes in the exclude list. Such a node is then transitioned right away from 
NEW to DECOMMISSIONED state. Also, AddNodeTransition is modified to fix the 
metric discrepancy if such a node joins back. A simple test case is included.

Another approach would instead add the ability to make transitions on 
inactiveRMNodes as well. That would avoid adding such dummy nodes to the 
active list in the first place and put them directly in the inactive list.
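
A toy model of the two approaches described above (none of these maps or 
states are real ResourceManager classes; this only illustrates where the 
excluded node ends up):

{code}
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DecommissionedListingSketch {
  enum NodeState { NEW, DECOMMISSIONED }

  public static void main(String[] args) {
    Map<String, NodeState> activeNodes = new HashMap<>();
    Map<String, NodeState> inactiveNodes = new HashMap<>();
    List<String> excludeList = Arrays.asList("nm1.example.com");  // from yarn.exclude

    for (String host : excludeList) {
      // Patch approach: add a placeholder NEW entry, transition it right away to
      // DECOMMISSIONED, and move it to the inactive list so the UI table can show it.
      activeNodes.put(host, NodeState.NEW);
      activeNodes.remove(host);
      inactiveNodes.put(host, NodeState.DECOMMISSIONED);
      // Alternative approach: support transitions on inactive nodes directly and
      // never add the placeholder to the active map at all.
    }
    System.out.println("decommissioned: " + inactiveNodes.keySet());
  }
}
{code}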

> Decommisioned Nodes not listed in Web UI
> 
>
> Key: YARN-3102
> URL: https://issues.apache.org/jira/browse/YARN-3102
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0
> Environment: 2 Node Manager and 1 Resource Manager 
>Reporter: Bibin A Chundatt
>Assignee: Kuhu Shukla
>Priority: Minor
> Attachments: YARN-3102-v1.patch
>
>
> Steps to reproduce:
> 1. On the RM1 machine, configure yarn.resourcemanager.nodes.exclude-path in 
> yarn-site.xml to point to a yarn.exclude file.
> 2. Add the NM1 hostname to yarn.exclude.
> 3. Start the nodes listed below: NM1, NM2 and the ResourceManager.
> 4. Check the decommissioned nodes at /cluster/nodes.
> The number of decommissioned nodes is listed as 1, but the table in 
> /cluster/nodes/decommissioned is empty (details of the decommissioned node 
> are not shown).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4560) Make scheduler error checking message more user friendly

2016-01-07 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated YARN-4560:
-
Labels: supportability  (was: )

> Make scheduler error checking message more user friendly
> 
>
> Key: YARN-4560
> URL: https://issues.apache.org/jira/browse/YARN-4560
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.8.0
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Trivial
>  Labels: supportability
> Attachments: YARN-4560.001.patch
>
>
> If the YARN properties below are poorly configured:
> {code}
> yarn.scheduler.minimum-allocation-mb
> yarn.scheduler.maximum-allocation-mb
> {code}
> The error message that shows up in the RM is:
> {panel}
> 2016-01-07 14:47:03,711 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> ResourceManager
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Invalid resource 
> scheduler memory allocation configuration, 
> yarn.scheduler.minimum-allocation-mb=-1, 
> yarn.scheduler.maximum-allocation-mb=-3, min should equal greater than 0, max 
> should be no smaller than min.
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.validateConf(FairScheduler.java:215)
> {panel}
> While it's technically correct, it's not very user friendly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4519) potential deadlock of CapacityScheduler between decrease container and assign containers

2016-01-07 Thread MENG DING (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088361#comment-15088361
 ] 

MENG DING commented on YARN-4519:
-

Please ignore the previous patch. I think there is room for improvement.

> potential deadlock of CapacityScheduler between decrease container and assign 
> containers
> 
>
> Key: YARN-4519
> URL: https://issues.apache.org/jira/browse/YARN-4519
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Reporter: sandflee
>Assignee: MENG DING
> Attachments: YARN-4519.1.patch
>
>
> In CapacityScheduler.allocate(), the FiCaSchedulerApp sync lock is taken 
> first, and the CapacityScheduler sync lock may then be taken in 
> decreaseContainer().
> In the scheduler thread, the CapacityScheduler sync lock is taken first in 
> allocateContainersToNode(), and the FiCaSchedulerApp sync lock may then be 
> taken in FiCaSchedulerApp.assignContainers(). 
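
A schematic illustration of the lock ordering described above (not actual 
CapacityScheduler code; the two monitors stand in for the scheduler and app 
sync locks):

{code}
public class LockOrderSketch {
  private final Object schedulerLock = new Object();  // stands in for CapacityScheduler
  private final Object appLock = new Object();        // stands in for FiCaSchedulerApp

  // Mirrors the allocate() path: app lock first, then the scheduler lock.
  void allocate() {
    synchronized (appLock) {
      synchronized (schedulerLock) {   // decreaseContainer() path
        // ...
      }
    }
  }

  // Mirrors the scheduler thread: scheduler lock first, then the app lock.
  void allocateContainersToNode() {
    synchronized (schedulerLock) {
      synchronized (appLock) {         // assignContainers() path
        // ...
      }
    }
  }
}
{code}

If one thread is inside allocate() holding the app lock while another is 
inside allocateContainersToNode() holding the scheduler lock, each waits on 
the lock the other holds.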



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4200) Refactor reader classes in storage to nest under hbase specific package name

2016-01-07 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088396#comment-15088396
 ] 

Li Lu commented on YARN-4200:
-

If there are no further comments I'll commit this patch tomorrow morning. 
Thanks! 

> Refactor reader classes in storage to nest under hbase specific package name
> 
>
> Key: YARN-4200
> URL: https://issues.apache.org/jira/browse/YARN-4200
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Li Lu
>Priority: Minor
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-4200-YARN-2928.001.patch, 
> YARN-4200-feature-YARN-2928.002.patch, YARN-4200-feature-YARN-2928.003.patch
>
>
> As suggested by [~gtCarrera9] in YARN-4074, filing jira to refactor the code 
> to group together the reader classes under a package in storage that 
> indicates these are hbase specific. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4062) Add the flush and compaction functionality via coprocessors and scanners for flow run table

2016-01-07 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088472#comment-15088472
 ] 

Vrushali C commented on YARN-4062:
--

Thanks [~sjlee0], I will go over these and make the relevant changes. 

> Add the flush and compaction functionality via coprocessors and scanners for 
> flow run table
> ---
>
> Key: YARN-4062
> URL: https://issues.apache.org/jira/browse/YARN-4062
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-4062-YARN-2928.1.patch, 
> YARN-4062-feature-YARN-2928.01.patch
>
>
> As part of YARN-3901, a coprocessor and scanner are being added for storing 
> into the flow_run table. It also needs flush & compaction processing in the 
> coprocessor, and perhaps a new scanner to deal with the data during the 
> flushing and compaction stages. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4200) Refactor reader classes in storage to nest under hbase specific package name

2016-01-07 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088358#comment-15088358
 ] 

Li Lu commented on YARN-4200:
-

Any other concerns on this patch? All checkstyle issues appear to be 
orthogonal to this change. Thanks. 

> Refactor reader classes in storage to nest under hbase specific package name
> 
>
> Key: YARN-4200
> URL: https://issues.apache.org/jira/browse/YARN-4200
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Li Lu
>Priority: Minor
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-4200-YARN-2928.001.patch, 
> YARN-4200-feature-YARN-2928.002.patch, YARN-4200-feature-YARN-2928.003.patch
>
>
> As suggested by [~gtCarrera9] in YARN-4074, filing jira to refactor the code 
> to group together the reader classes under a package in storage that 
> indicates these are hbase specific. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4335) Allow ResourceRequests to specify ExecutionType of a request ask

2016-01-07 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088371#comment-15088371
 ] 

Arun Suresh commented on YARN-4335:
---

Committed this to yarn-2877 after fixing some of the whitespace warnings.

> Allow ResourceRequests to specify ExecutionType of a request ask
> 
>
> Key: YARN-4335
> URL: https://issues.apache.org/jira/browse/YARN-4335
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-4335-yarn-2877.001.patch, YARN-4335.002.patch, 
> YARN-4335.003.patch
>
>
> YARN-2882 introduced container types that are internal (not user-facing) and 
> are used by the ContainerManager during execution at the NM.
> With this JIRA we are introducing (user-facing) resource request types that 
> are used by the AM to specify the type of the ResourceRequest.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4335) Allow ResourceRequests to specify ExecutionType of a request ask

2016-01-07 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-4335:
--
Release Note:   (was: Committed this to yarn-2877 after Fixing some of the 
whitespace warnings.)

> Allow ResourceRequests to specify ExecutionType of a request ask
> 
>
> Key: YARN-4335
> URL: https://issues.apache.org/jira/browse/YARN-4335
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-4335-yarn-2877.001.patch, YARN-4335.002.patch, 
> YARN-4335.003.patch
>
>
> YARN-2882 introduced container types that are internal (not user-facing) and 
> are used by the ContainerManager during execution at the NM.
> With this JIRA we are introducing (user-facing) resource request types that 
> are used by the AM to specify the type of the ResourceRequest.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4062) Add the flush and compaction functionality via coprocessors and scanners for flow run table

2016-01-07 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088407#comment-15088407
 ] 

Vrushali C commented on YARN-4062:
--

Note: filed YARN-4561 for enhancements such as the ability to turn compaction 
on/off at runtime, and to allow whitelisting as well as blacklisting of 
records to be processed.

> Add the flush and compaction functionality via coprocessors and scanners for 
> flow run table
> ---
>
> Key: YARN-4062
> URL: https://issues.apache.org/jira/browse/YARN-4062
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-4062-YARN-2928.1.patch, 
> YARN-4062-feature-YARN-2928.01.patch
>
>
> As part of YARN-3901, a coprocessor and scanner are being added for storing 
> into the flow_run table. It also needs flush & compaction processing in the 
> coprocessor, and perhaps a new scanner to deal with the data during the 
> flushing and compaction stages. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4062) Add the flush and compaction functionality via coprocessors and scanners for flow run table

2016-01-07 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088436#comment-15088436
 ] 

Sangjin Lee commented on YARN-4062:
---

Thanks [~vrushalic] for the work! She and I went over the patch offline. Here 
are some comments on the patch. Please look at the javac, findbugs, checkstyle, 
javadoc, whitespace issues too.

(TimelineStorageUtils.java)
- l.572: I'm not too clear about this; should we break out of the for loop once 
we find the app id?

(TimestampGenerator.java)
- l.36: should we make this a round million?
- we should review the javadoc, and update it accordingly

(FlowRunCoprocessor.java)
- l.46: KeyValyeScanner is an unused import
- l.58: LOG is now used, so the annotation should be removed
- l.272-273: nit: you can simply do
{code}
request.isMajor() ? FlowScannerOperation.MAJOR_COMPACTION :
FlowScannerOperation.MINOR_COMPACTION);
{code}

(FlowScanner.java)
- l.67: can you add this parameter as a real configuration (in 
YarnConfiguration.java/yarn-default.xml)?
- l.79: scanType is not used anywhere in the code?
- l.93: the instanceof operator returns false if the left operand is null, so 
the null check is superfluous
- l.131: getAggregationCompactionDimension() should be removed
- l.433: we discussed separating the logic of processing summation into a 
separate class so that it is more amenable to unit tests; it is not a strong 
preference, so please use your discretion to decide whether that is better
- l.447: typo: "order" -> "older"
- l.471: shouldn't we use NumericValueConverter.add() to add the values to 
maintain the generic nature of the types?
- l.490: nit: since FLOW_APP_ID is a static, we shouldn't use the "this" 
qualifier
- l.493: the same nit as above
- l.576: FlowScanner still has createNewCell(); it should be removed, right?

(TestFlowDataGenerator.java)
- l.21: unused import
- l.32: incorrect import (should be apache commons logging log)

(TestHBaseStorageFlowRunCompaction.java)
- l.23-82: there are 16 unused imports
- l.95-97: these 3 variables are unused
- l.166: admin is unused
- l.232-233: see above
- l.255: observer context is unused
- l.332: typo: "FINAl" -> "FINAL"
- l.414: typo: "FINAl" -> "FINAL"

We discussed how the co-processor can disable the compaction behavior without 
restarting the region server; (1) there should be a configuration/flag that 
enables and disables the compaction behavior, and (2) the co-processor should 
be able to pick up the flag value without restarting the region server. We 
don't need to do it as part of this JIRA. We can do it in YARN-4561.

We also discussed how to handle the replication scenarios. It might be a 
good idea to use some kind of a regex matching against the cluster name as a 
way to whitelist or blacklist clusters for the compaction behavior. We can 
probably handle it in YARN-4561 as well.
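
A minimal sketch of those two ideas together (a runtime on/off flag plus a 
cluster-name regex); the field names and the way the flag would actually be 
refreshed are assumptions for YARN-4561, not part of this patch:

{code}
import java.util.regex.Pattern;

/** Hypothetical policy object a flow-run coprocessor could consult before compacting. */
public class CompactionPolicySketch {
  private volatile boolean compactionEnabled = true;
  private volatile Pattern clusterWhitelist = Pattern.compile("dc1-.*");

  boolean shouldCompact(String clusterName) {
    return compactionEnabled && clusterWhitelist.matcher(clusterName).matches();
  }

  // Could be driven by configuration, ZooKeeper, etc., without a region server restart.
  void refresh(boolean enabled, String whitelistRegex) {
    this.compactionEnabled = enabled;
    this.clusterWhitelist = Pattern.compile(whitelistRegex);
  }
}
{code}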

> Add the flush and compaction functionality via coprocessors and scanners for 
> flow run table
> ---
>
> Key: YARN-4062
> URL: https://issues.apache.org/jira/browse/YARN-4062
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-4062-YARN-2928.1.patch, 
> YARN-4062-feature-YARN-2928.01.patch
>
>
> As part of YARN-3901, a coprocessor and scanner are being added for storing 
> into the flow_run table. It also needs flush & compaction processing in the 
> coprocessor, and perhaps a new scanner to deal with the data during the 
> flushing and compaction stages. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4476) Matcher for complex node label expresions

2016-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088462#comment-15088462
 ] 

Hadoop QA commented on YARN-4476:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s 
{color} | {color:red} Patch generated 23 new checkstyle issues in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 (total was 0, now 23). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 58m 52s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed 
with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 60m 1s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed 
with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 135m 53s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781072/YARN-4476-2.patch |
| JIRA Issue | YARN-4476 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 70d8dc7d95a5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (YARN-4559) Make leader elector and zk store share the same curator client

2016-01-07 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-4559:
--
Attachment: YARN-4559.1.patch

> Make leader elector and zk store share the same curator client
> --
>
> Key: YARN-4559
> URL: https://issues.apache.org/jira/browse/YARN-4559
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-4559.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4180) AMLauncher does not retry on failures when talking to NM

2016-01-07 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088346#comment-15088346
 ] 

Karthik Kambatla commented on YARN-4180:


Cherry-picked to 2.6.4 as well. Thanks for the ping, Junping. 

> AMLauncher does not retry on failures when talking to NM 
> -
>
> Key: YARN-4180
> URL: https://issues.apache.org/jira/browse/YARN-4180
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.1
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>Priority: Critical
> Fix For: 2.7.2, 2.6.4
>
> Attachments: YARN-4180-branch-2.7.2.txt, YARN-4180.001.patch, 
> YARN-4180.002.patch, YARN-4180.002.patch, YARN-4180.002.patch
>
>
> We see issues with the RM trying to launch a container while an NM is 
> restarting, and we get exceptions like NMNotReadyException. While YARN-3842 
> added retries for other clients of the NM (AMs mainly), they are not used by 
> the AMLauncher in the RM, so these intermittent errors cause job failures. 
> This can manifest during rolling restarts of NMs. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4180) AMLauncher does not retry on failures when talking to NM

2016-01-07 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-4180:
---
Fix Version/s: 2.6.4

> AMLauncher does not retry on failures when talking to NM 
> -
>
> Key: YARN-4180
> URL: https://issues.apache.org/jira/browse/YARN-4180
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.1
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>Priority: Critical
> Fix For: 2.7.2, 2.6.4
>
> Attachments: YARN-4180-branch-2.7.2.txt, YARN-4180.001.patch, 
> YARN-4180.002.patch, YARN-4180.002.patch, YARN-4180.002.patch
>
>
> We see issues with the RM trying to launch a container while an NM is 
> restarting, and we get exceptions like NMNotReadyException. While YARN-3842 
> added retries for other clients of the NM (AMs mainly), they are not used by 
> the AMLauncher in the RM, so these intermittent errors cause job failures. 
> This can manifest during rolling restarts of NMs. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4559) Make leader elector and zk store share the same curator client

2016-01-07 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088403#comment-15088403
 ] 

Karthik Kambatla commented on YARN-4559:


Quickly skimmed through the patch. High-level comments:
# Curator start and close should be in the same class. Let us do both in the 
RM or both in the elector service; my preference would be for doing it in the 
RM (a rough sketch of that wiring follows below). Is there any reason that 
wouldn't work? 
# The ZK-store-specific configs (adding the RM username:password for 
exclusivity) should likely be done in the serviceInit of the store. 
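
A minimal sketch of that wiring, using the Curator API; the store hook at the 
end is hypothetical, and the actual patch may organize this differently:

{code}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class SharedCuratorSketch {
  public static void main(String[] args) throws Exception {
    // Created (and later closed) in one place, e.g. the RM.
    CuratorFramework curator = CuratorFrameworkFactory.newClient(
        "zk1:2181,zk2:2181,zk3:2181", new ExponentialBackoffRetry(1000, 3));
    curator.start();

    // The same client is handed to both the leader elector and the ZK state store.
    LeaderLatch elector = new LeaderLatch(curator, "/yarn-leader-election/cluster1");
    elector.start();
    // zkStateStore.setCuratorClient(curator);  // hypothetical store hook

    // On shutdown, close in the same class that created the client.
    elector.close();
    curator.close();
  }
}
{code}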

> Make leader elector and zk store share the same curator client
> --
>
> Key: YARN-4559
> URL: https://issues.apache.org/jira/browse/YARN-4559
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-4559.1.patch
>
>
> After YARN-4438, we can reuse the same curator client for leader elector and 
> zk store



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4561) Compaction coprocessor enhancements: On/Off, whitelisting, blacklisting

2016-01-07 Thread Vrushali C (JIRA)
Vrushali C created YARN-4561:


 Summary: Compaction coprocessor enhancements: On/Off, 
whitelisting, blacklisting
 Key: YARN-4561
 URL: https://issues.apache.org/jira/browse/YARN-4561
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Vrushali C
Assignee: Vrushali C



YARN-4062 deals with the basic flush- and compaction-related coprocessor 
functionality. We also need to ensure we can turn compaction on/off as a whole 
(in case of production issues) and provide a way to whitelist and blacklist 
compaction processing for certain records.

For instance, we may want to compact only those records that belong to 
applications in the local datacenter. This way we do not interfere with HBase 
replication, which could otherwise cause coprocessors to process the same 
record in more than one DC at the same time.

Also, we might want to skip compacting/processing certain records, perhaps 
those whose row key matches certain criteria.

Filing this JIRA to track these enhancements.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4304) AM max resource configuration per partition to be displayed/updated correctly in UI and in various partition related metrics

2016-01-07 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088458#comment-15088458
 ] 

Wangda Tan commented on YARN-4304:
--

Hi Sunil,

Sorry for my late response; I thought I had replied to this comment.

Would using a non-primitive type such as Float solve the problem?

> AM max resource configuration per partition to be displayed/updated correctly 
> in UI and in various partition related metrics
> 
>
> Key: YARN-4304
> URL: https://issues.apache.org/jira/browse/YARN-4304
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Affects Versions: 2.7.1
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: 0001-YARN-4304.patch, 0002-YARN-4304.patch, 
> 0003-YARN-4304.patch, 0004-YARN-4304.patch, 0005-YARN-4304.patch, 
> 0005-YARN-4304.patch, 0006-YARN-4304.patch, 0007-YARN-4304.patch, 
> 0008-YARN-4304.patch, 0009-YARN-4304.patch, REST_and_UI.zip
>
>
> As we are supporting per-partition level max AM resource percentage 
> configuration, UI and various metrics also need to display correct 
> configurations related to same. 
> For eg: Current UI still shows am-resource percentage per queue level. This 
> is to be updated correctly when label config is used.
> - Display max-am-percentage per-partition in Scheduler UI (label also) and in 
> ClusterMetrics page
> - Update queue/partition related metrics w.r.t per-partition 
> am-resource-percentage



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3995) Some of the NM events are not getting published due race condition when AM container finishes in NM

2016-01-07 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088490#comment-15088490
 ] 

Naganarasimha G R commented on YARN-3995:
-

bq. Instead of spawning multiple threads may be we can have single thread which 
does this activity ?
Yes, I wanted to address that, as I was trying to point out earlier: ??Instead 
of spawning multiple threads may be we can have single thread which does this 
activity??

bq. How about creating a long-lived single ScheduledExecutorService and 
schedule removeApplication() with the specified delay?
IIUC, in the approach you mentioned, the callable sleeps for the configured 
period for an application and then removes it. But if multiple apps finish at 
the same time, the initial apps only wait for the configured period, while 
subsequent apps wait a little longer than the earlier ones (the app's wait 
period plus the wait periods of the other apps ahead of it in the queue). 
Thoughts?
Some approaches I can adopt to avoid the above issue are:
* Record the timestamp when *close AM container* was called, and in the 
callable wait only if the elapsed time < the configured linger time.
* Keep a map and a single thread (either an executor service or a timer task) 
with a lower interval, say 500ms, which checks the map and removes all apps 
whose elapsed time is > the configured linger time.
Thoughts?
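
One way to read the ScheduledExecutorService suggestion: each app's removal is 
scheduled with its own delay, so a single worker thread does not make later 
apps wait for earlier ones as long as the removal itself is quick. A minimal 
sketch, with the app id and removeApplication() standing in for the real NM 
collector logic:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DelayedAppRemovalSketch {
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();
  private final Map<String, Object> collectors = new ConcurrentHashMap<>();
  private final long lingerMs;

  DelayedAppRemovalSketch(long lingerMs) {
    this.lingerMs = lingerMs;
  }

  /** Called when the AM container finishes; removal happens lingerMs later. */
  void scheduleRemoval(String appId) {
    scheduler.schedule(() -> removeApplication(appId), lingerMs, TimeUnit.MILLISECONDS);
  }

  private void removeApplication(String appId) {
    collectors.remove(appId);  // events published before this point are still accepted
  }
}
{code}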

> Some of the NM events are not getting published due race condition when AM 
> container finishes in NM 
> 
>
> Key: YARN-3995
> URL: https://issues.apache.org/jira/browse/YARN-3995
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, timelineserver
>Affects Versions: YARN-2928
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-3995-feature-YARN-2928.v1.001.patch
>
>
> As discussed in YARN-3045: while testing with TestDistributedShell, we found 
> that a few of the container metrics events were failing because of a race 
> condition. When the AM container finishes and removes the collector for the 
> app, there is still a possibility that events published for the app by the 
> current NM and other NMs are still in the pipeline. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4543) TestNodeStatusUpdater.testStopReentrant fails + JUnit misusage

2016-01-07 Thread Akihiro Suda (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akihiro Suda updated YARN-4543:
---
Attachment: YARN-4543-1.patch

Attached YARN-4543-1.patch.
As with YARN-4548, I verified the patch using my tool that adds noise to 
thread interleaving: 
https://github.com/AkihiroSuda/MicroEarthquake/tree/4367ec9d098c8943e87933e473f8206aecbd63b0

> TestNodeStatusUpdater.testStopReentrant fails + JUnit misusage
> --
>
> Key: YARN-4543
> URL: https://issues.apache.org/jira/browse/YARN-4543
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Akihiro Suda
>Priority: Minor
> Attachments: YARN-4543-1.patch
>
>
> {panel}
> TestNodeStatusUpdater.testStopReentrant:1269 expected:<0> but was:<1>
> {panel}
> https://github.com/apache/hadoop/blob/4ac6799d4a8b071e0d367c2d709e84d8ea06942d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeStatusUpdater.java#L1269
> The corresponding  JUnit assertion code is: 
> {panel}
>  Assert.assertEquals(numCleanups.get(), 1);
> {panel}
> It seems that the 1st arg and the 2nd one should be swapped.
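
For reference, the corrected call (JUnit's assertEquals takes the expected 
value first, then the actual value; numCleanups is the counter from the 
existing test):

{code}
Assert.assertEquals(1, numCleanups.get());
{code}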



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4393) TestResourceLocalizationService#testFailedDirsResourceRelease fails intermittently

2016-01-07 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15087070#comment-15087070
 ] 

Varun Saxena commented on YARN-4393:


Thanks [~rohithsharma] for the review and commit.
Thanks [~ozawa] for the review and originally reporting this issue(while 
reviewing YARN-4380)

> TestResourceLocalizationService#testFailedDirsResourceRelease fails 
> intermittently
> --
>
> Key: YARN-4393
> URL: https://issues.apache.org/jira/browse/YARN-4393
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: test
> Fix For: 2.9.0
>
> Attachments: YARN-4393.01.patch
>
>
> [~ozawa] pointed out this failure on YARN-4380.
> Check 
> https://issues.apache.org/jira/browse/YARN-4380?focusedCommentId=15023773=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15023773
> {noformat}
> Tests run: 14, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 5.518 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService
> testFailedDirsResourceRelease(org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService)
>  Time elapsed: 0.093 sec <<< FAILURE!
> org.mockito.exceptions.verification.junit.ArgumentsAreDifferent:
> Argument(s) are different! Wanted:
> eventHandler.handle(
> 
> );
> -> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService.testFailedDirsResourceRelease(TestResourceLocalizationService.java:2632)
> Actual invocation has different arguments:
> eventHandler.handle(
> EventType: APPLICATION_INITED
> );
> -> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:183)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService.testFailedDirsResourceRelease(TestResourceLocalizationService.java:2632)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4306) Test failure: TestClientRMTokens

2016-01-07 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15087109#comment-15087109
 ] 

Rohith Sharma K S commented on YARN-4306:
-

After HADOOP-12687, this test case is passing. We can close this as a 
duplicate of HADOOP-12687.

Reference to the passing test run in the Hadoop QA 
[report|https://issues.apache.org/jira/browse/YARN-4538?focusedCommentId=15087024=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15087024]

> Test failure: TestClientRMTokens
> 
>
> Key: YARN-4306
> URL: https://issues.apache.org/jira/browse/YARN-4306
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: test
>Reporter: Sunil G
>Assignee: Sunil G
>
> Tests are failing locally as well. As part of the HADOOP-12321 Jenkins run, 
> I see the same error:
> {noformat}testShortCircuitRenewCancelDifferentHostSamePort(org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens)
>   Time elapsed: 0.638 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens.checkShortCircuitRenewCancel(TestClientRMTokens.java:363)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens.testShortCircuitRenewCancelDifferentHostSamePort(TestClientRMTokens.java:316)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4557) missed NonPartitioned Request Scheduling Opportunity is not correctly checking for all priorities of an app

2016-01-07 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15087155#comment-15087155
 ] 

Naganarasimha G R commented on YARN-4557:
-

Along with it, there were two minor issues:
* In {{AppSchedulingInfo}}, the comparator field doesn't use generics.
* {{TestNodeLabelContainerAllocation.testResourceRequestUpdateNodePartitions}} 
has an unused variable.

Will fix the above two as well.

> missed NonPartitioned Request Scheduling Opportunity is not correctly 
> checking for all priorities of an app
> ---
>
> Key: YARN-4557
> URL: https://issues.apache.org/jira/browse/YARN-4557
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Minor
>
> When an app has submitted requests for multiple priorities in the default 
> partition, and one of the priority requests has missed 
> non-partitioned-resource-request scheduling opportunities equivalent to the 
> cluster size, then a container needs to be allocated for it. Currently, if 
> the higher-priority requests don't satisfy the condition, the whole 
> application is skipped instead of only that priority.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4371) "yarn application -kill" should take multiple application ids

2016-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15087154#comment-15087154
 ] 

Hadoop QA commented on YARN-4371:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 9s 
{color} | {color:red} Patch generated 3 new checkstyle issues in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client (total was 15, now 17). 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 21s 
{color} | {color:green} hadoop-yarn-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 38s 
{color} | {color:green} hadoop-yarn-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 44s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12780944/0003-YARN-4371.patch |
| JIRA Issue | YARN-4371 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0a3b5daecadb 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 

[jira] [Commented] (YARN-4318) Test failure: TestAMAuthorization

2016-01-07 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15087117#comment-15087117
 ] 

Rohith Sharma K S commented on YARN-4318:
-

After HADOOP-12687, this test case is passing. We can close this as a 
duplicate of HADOOP-12687.
Reference to the passing test run in the Hadoop QA 
[report|https://issues.apache.org/jira/browse/YARN-4538?focusedCommentId=15087024=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15087024]

> Test failure: TestAMAuthorization
> -
>
> Key: YARN-4318
> URL: https://issues.apache.org/jira/browse/YARN-4318
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.0.0
> Environment: jenkins
>Reporter: Tsuyoshi Ozawa
>Assignee: Kuhu Shukla
>
> {quote}
> Tests run: 4, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 14.891 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.TestAMAuthorization
> testUnauthorizedAccess[0](org.apache.hadoop.yarn.server.resourcemanager.TestAMAuthorization)
>   Time elapsed: 3.208 sec  <<< ERROR!
> java.net.UnknownHostException: Invalid host name: local host is: (unknown); 
> destination host is: "b5a5dd9ec835":8030; java.net.UnknownHostException; For 
> more details see:  http://wiki.apache.org/hadoop/UnknownHost
>   at org.apache.hadoop.ipc.Client$Connection.(Client.java:403)
>   at org.apache.hadoop.ipc.Client.getConnection(Client.java:1512)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1439)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1400)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>   at com.sun.proxy.$Proxy15.registerApplicationMaster(Unknown Source)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.registerApplicationMaster(ApplicationMasterProtocolPBClientImpl.java:106)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestAMAuthorization.testUnauthorizedAccess(TestAMAuthorization.java:273)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (YARN-4318) Test failure: TestAMAuthorization

2016-01-07 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S resolved YARN-4318.
-
Resolution: Duplicate

Closing as duplicate, reopen if test case is still failing

> Test failure: TestAMAuthorization
> -
>
> Key: YARN-4318
> URL: https://issues.apache.org/jira/browse/YARN-4318
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.0.0
> Environment: jenkins
>Reporter: Tsuyoshi Ozawa
>Assignee: Kuhu Shukla
>
> {quote}
> Tests run: 4, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 14.891 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.TestAMAuthorization
> testUnauthorizedAccess[0](org.apache.hadoop.yarn.server.resourcemanager.TestAMAuthorization)
>   Time elapsed: 3.208 sec  <<< ERROR!
> java.net.UnknownHostException: Invalid host name: local host is: (unknown); 
> destination host is: "b5a5dd9ec835":8030; java.net.UnknownHostException; For 
> more details see:  http://wiki.apache.org/hadoop/UnknownHost
>   at org.apache.hadoop.ipc.Client$Connection.(Client.java:403)
>   at org.apache.hadoop.ipc.Client.getConnection(Client.java:1512)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1439)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1400)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>   at com.sun.proxy.$Proxy15.registerApplicationMaster(Unknown Source)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.registerApplicationMaster(ApplicationMasterProtocolPBClientImpl.java:106)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestAMAuthorization.testUnauthorizedAccess(TestAMAuthorization.java:273)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4557) missed NonPartitioned Request Scheduling Opportunity is not correctly checking for all priorities of an app

2016-01-07 Thread Naganarasimha G R (JIRA)
Naganarasimha G R created YARN-4557:
---

 Summary: missed NonPartitioned Request Scheduling Opportunity is 
not correctly checking for all priorities of an app
 Key: YARN-4557
 URL: https://issues.apache.org/jira/browse/YARN-4557
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Naganarasimha G R
Assignee: Naganarasimha G R
Priority: Minor


When an app has submitted requests at multiple priorities in the default partition, 
and one of those priority requests has missed the non-partitioned-resource-request 
scheduling opportunity a number of times equal to the cluster size, a container 
needs to be allocated for it. Currently, if the higher-priority request does not 
satisfy that condition, the whole application is skipped instead of only that 
priority.
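
As a rough standalone sketch of the intended behaviour (the class and method 
names below are illustrative, not the actual CapacityScheduler code), the 
missed-opportunity count would be kept and checked per priority, so only the 
priorities that have not yet waited long enough are skipped:
{code}
import java.util.HashMap;
import java.util.Map;

// Standalone sketch only: per-priority bookkeeping of missed
// non-partitioned-resource-request scheduling opportunities.
public class PerPriorityOpportunityTracker {
  private final Map<Integer, Integer> missedByPriority =
      new HashMap<Integer, Integer>();

  /** Record one missed opportunity for the given priority. */
  public void recordMiss(int priority) {
    Integer missed = missedByPriority.get(priority);
    missedByPriority.put(priority, missed == null ? 1 : missed + 1);
  }

  /**
   * A request at this priority may fall back to the default partition once it
   * has missed at least clusterSize opportunities. Other priorities of the
   * same application are evaluated independently instead of skipping the
   * whole application.
   */
  public boolean canAllocateOnDefaultPartition(int priority, int clusterSize) {
    Integer missed = missedByPriority.get(priority);
    return missed != null && missed >= clusterSize;
  }
}
{code}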



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4557) missed NonPartitioned Request Scheduling Opportunity is not correctly checking for all priorities of an app

2016-01-07 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-4557:

Attachment: YARN-4557.v1.001.patch

Fixing the above issues !

> missed NonPartitioned Request Scheduling Opportunity is not correctly 
> checking for all priorities of an app
> ---
>
> Key: YARN-4557
> URL: https://issues.apache.org/jira/browse/YARN-4557
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Minor
> Attachments: YARN-4557.v1.001.patch
>
>
> When an app has submitted requests at multiple priorities in the default partition, 
> and one of those priority requests has missed the non-partitioned-resource-request 
> scheduling opportunity a number of times equal to the cluster size, a container 
> needs to be allocated for it. Currently, if the higher-priority request does not 
> satisfy that condition, the whole application is skipped instead of only that 
> priority.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4549) Containers stuck in KILLING state

2016-01-07 Thread Danil Serdyuchenko (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15087217#comment-15087217
 ] 

Danil Serdyuchenko commented on YARN-4549:
--

We did some more digging and found that a few containers that are currently in 
the RUNNING state are missing directories under the {{nmPrivate}} dir. The web 
interface reports that the containers are running on that node, and the 
container processes are there too, but the entire application dir under 
{{nmPrivate}} is missing.

[~jlowe] This usually happens to long-running containers. The PID files are 
missing for containers in the KILLING state, and for certain RUNNING containers. 
The pid file should be under {{nm-local-dir}}; for us it's: 
{{/tmp/hadoop-ec2-user/nm-local-dir/nmPrivate///.pid}}.

> Containers stuck in KILLING state
> -
>
> Key: YARN-4549
> URL: https://issues.apache.org/jira/browse/YARN-4549
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Danil Serdyuchenko
>
> We are running samza 0.8 on YARN 2.7.1 with {{LinuxContainerExecutor}} as the 
> container-executor with cgroups configuration. Also we have NM recovery 
> enabled.
> We observe a lot of containers that get stuck in the KILLING state after the 
> NM tries to kill them. The container remains running indefinitely, which 
> causes some duplication as new containers are brought up to replace them. 
> Looking through the logs, the NM can't seem to get the container PID.
> {noformat}
> 16/01/05 05:16:44 INFO containermanager.ContainerManagerImpl: Stopping 
> container with container Id: container_1448454866800_0023_01_05
> 16/01/05 05:16:44 INFO nodemanager.NMAuditLogger: USER=ec2-user 
> IP=10.51.111.243OPERATION=Stop Container Request
> TARGET=ContainerManageImpl  RESULT=SUCCESS  
> APPID=application_1448454866800_0023
> CONTAINERID=container_1448454866800_0023_01_05
> 16/01/05 05:16:44 INFO container.ContainerImpl: Container 
> container_1448454866800_0023_01_05 transitioned from RUNNING to KILLING
> 16/01/05 05:16:44 INFO launcher.ContainerLaunch: Cleaning up container 
> container_1448454866800_0023_01_05
> 16/01/05 05:16:47 INFO launcher.ContainerLaunch: Could not get pid for 
> container_1448454866800_0023_01_05. Waited for 2000 ms.
> {noformat}
> The PID files for each container seem to be present on the node. We weren't 
> able to consistently replicate this and are hoping that someone has come 
> across this before.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4544) All the log messages about rolling monitoring interval are shown with the WARN level

2016-01-07 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15087186#comment-15087186
 ] 

Akira AJISAKA commented on YARN-4544:
-

Thanks [~bwtakacy] for reporting this and creating the patch. It looks good to 
me.
I found a typo near the change. Would you fix it?
{code}
LOG.info("rollingMonitorInterval is set as "
+ configuredRollingMonitorInterval + ". "
+ "The log rolling mornitoring interval is disabled. "
+ "The logs will be aggregated after this application is 
finished.");
{code}
mornitoring should be monitoring.

> All the log messages about rolling monitoring interval are shown with the 
> WARN level
> 
>
> Key: YARN-4544
> URL: https://issues.apache.org/jira/browse/YARN-4544
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation, nodemanager
>Affects Versions: 2.7.1
>Reporter: Takashi Ohnishi
> Attachments: YARN-4544.patch
>
>
> About yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds, 
> there are three log messages corresponding to the value set to this 
> parameter, but all of them are shown with the WARN level.
> (a) disabled (default)
> {code}
> 2016-01-05 22:19:29,062 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggreg 
> ation.AppLogAggregatorImpl: rollingMonitorInterval is set as -1. The log 
> rolling mornitoring interval is disabled. The logs will be aggregated after 
> this application is finished.
> {code}
> (b) enabled
> {code}
> 2016-01-06 00:41:15,808 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl:
>  rollingMonitorInterval is set as 7200. The logs will be aggregated every 
> 7200 seconds
> {code}
> (c) enabled but wrong configuration
> {code}
> 2016-01-06 00:39:50,820 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl:
>  rollingMonitorIntervall should be more than or equal to 3600 seconds. Using 
> 3600 seconds instead.
> {code}
> I think it is better to output with WARN only in case (c), but it is ok to 
> output with INFO in case (a) and (b).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4548) TestCapacityScheduler.testRecoverRequestAfterPreemption fails with NPE

2016-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15087183#comment-15087183
 ] 

Hadoop QA commented on YARN-4548:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 59m 18s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed 
with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 60m 8s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed 
with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 136m 57s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12780935/YARN-4548-2.patch |
| JIRA Issue | YARN-4548 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ec31525be61f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 34cd7cd |
| Default Java | 1.7.0_91 |
| Multi-JDK versions |  

[jira] [Updated] (YARN-4544) All the log messages about rolling monitoring interval are shown with the WARN level

2016-01-07 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated YARN-4544:

Assignee: Takashi Ohnishi

> All the log messages about rolling monitoring interval are shown with the 
> WARN level
> 
>
> Key: YARN-4544
> URL: https://issues.apache.org/jira/browse/YARN-4544
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation, nodemanager
>Affects Versions: 2.7.1
>Reporter: Takashi Ohnishi
>Assignee: Takashi Ohnishi
> Attachments: YARN-4544.patch
>
>
> About yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds, 
> there are three log messages corresponding to the value set to this 
> parameter, but all of them are shown with the WARN level.
> (a) disabled (default)
> {code}
> 2016-01-05 22:19:29,062 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggreg 
> ation.AppLogAggregatorImpl: rollingMonitorInterval is set as -1. The log 
> rolling mornitoring interval is disabled. The logs will be aggregated after 
> this application is finished.
> {code}
> (b) enabled
> {code}
> 2016-01-06 00:41:15,808 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl:
>  rollingMonitorInterval is set as 7200. The logs will be aggregated every 
> 7200 seconds
> {code}
> (c) enabled but wrong configuration
> {code}
> 2016-01-06 00:39:50,820 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl:
>  rollingMonitorIntervall should be more than or equal to 3600 seconds. Using 
> 3600 seconds instead.
> {code}
> I think it is better to output with WARN only in case (c), but it is ok to 
> output with INFO in case (a) and (b).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4548) TestCapacityScheduler.testRecoverRequestAfterPreemption fails with NPE

2016-01-07 Thread Akihiro Suda (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akihiro Suda updated YARN-4548:
---
Attachment: YARN-4548-2.patch

Fixed the whitespace issues (YARN-4548-2.patch).

The last JUnit failure with YARN-4548-1.patch is unrelated (YARN-4556).


> TestCapacityScheduler.testRecoverRequestAfterPreemption fails with NPE
> --
>
> Key: YARN-4548
> URL: https://issues.apache.org/jira/browse/YARN-4548
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Akihiro Suda
> Attachments: YARN-4548-1.patch, YARN-4548-2.patch, yarn-4548.log
>
>
> {code}
> testRecoverRequestAfterPreemption(org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler)
>   Time elapsed: 5.552 sec 
> <<< ERROR!
> java.lang.NullPointerException: null
>at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler.testRecoverRequestAfterPreemption(TestCapacitySch
> eduler.java:1263)
> {code}
> https://github.com/apache/hadoop/blob/d36b6e045f317c94e97cb41a163aa974d161a404/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java#L1260-L1263
> Jenkins also hit this two months ago: 
> https://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201510.mbox/%3C1100047319.7290.1446252743553.JavaMail.jenkins@crius%3E
> My Hadoop version: 4e4b3a8465a8433e78e015cb1ce7e0dc1ebeb523 (Dec 30, 2015)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4545) Allow YARN distributed shell to use ATS v1.5 APIs

2016-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15087074#comment-15087074
 ] 

Hadoop QA commented on YARN-4545:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 20s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 58s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
49s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 10s 
{color} | {color:red} branch/hadoop-project no findbugs output file 
(hadoop-project/target/findbugsXml.xml) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 8s 
{color} | {color:red} hadoop-yarn-applications-distributedshell in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 4m 5s 
{color} | {color:red} root in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 4m 5s {color} 
| {color:red} root in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 4m 22s 
{color} | {color:red} root in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 4m 22s {color} 
| {color:red} root in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 44s 
{color} | {color:red} Patch generated 2 new checkstyle issues in root (total 
was 257, now 214). {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 9s 
{color} | {color:red} hadoop-yarn-applications-distributedshell in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} mvneclipse {color} | {color:red} 0m 9s 
{color} | {color:red} hadoop-yarn-applications-distributedshell in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 9s 
{color} | {color:red} patch/hadoop-project no findbugs output file 
(hadoop-project/target/findbugsXml.xml) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 29s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
introduced 1 new FindBugs issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 9s 
{color} | {color:red} hadoop-yarn-applications-distributedshell in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 7s 
{color} | {color:red} hadoop-yarn-applications-distributedshell in the patch 
failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 9s 
{color} | {color:red} hadoop-yarn-applications-distributedshell in the patch 
failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 7s 
{color} | {color:green} hadoop-project in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 20s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} 

[jira] [Commented] (YARN-4544) All the log messages about rolling monitoring interval are shown with the WARN level

2016-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15087370#comment-15087370
 ] 

Hadoop QA commented on YARN-4544:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 26s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 56s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 58s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12780961/YARN-4544.2.patch |
| JIRA Issue | YARN-4544 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 23ccad12959a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (YARN-4538) QueueMetrics pending cores and memory metrics wrong

2016-01-07 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-4538:
---
Attachment: 0003-YARN-4538.patch

Attaching patch after checkstyle fix

> QueueMetrics pending  cores and memory metrics wrong
> 
>
> Key: YARN-4538
> URL: https://issues.apache.org/jira/browse/YARN-4538
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: 0001-YARN-4538.patch, 0002-YARN-4538.patch, 
> 0003-YARN-4538.patch
>
>
> Submit 2 applications to the default queue.
> Check the queue metrics for pending cores and memory.
> {noformat}
> List<QueueInfo> allQueues = client.getChildQueueInfos("root");
> for (QueueInfo queueInfo : allQueues) {
>   QueueStatistics quastats = queueInfo.getQueueStatistics();
>   System.out.println(quastats.getPendingVCores());
>   System.out.println(quastats.getPendingMemoryMB());
> }
> {noformat}
> *Output :*
> -20
> -20480



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4238) createdTime and modifiedTime is not reported while publishing entities to ATSv2

2016-01-07 Thread Sreenath Somarajapuram (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15087527#comment-15087527
 ] 

Sreenath Somarajapuram commented on YARN-4238:
--

Thanks, [~varun_saxena].
Also, is it possible to query records in ascending and descending order of 
creation?

> createdTime and modifiedTime is not reported while publishing entities to 
> ATSv2
> ---
>
> Key: YARN-4238
> URL: https://issues.apache.org/jira/browse/YARN-4238
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-4238-YARN-2928.01.patch, 
> YARN-4238-feature-YARN-2928.002.patch, YARN-4238-feature-YARN-2928.003.patch, 
> YARN-4238-feature-YARN-2928.02.patch
>
>
> While publishing entities from RM and elsewhere we are not sending created 
> time. For instance, created time in TimelineServiceV2Publisher class and for 
> other entities in other such similar classes is not updated. We can easily 
> update created time when sending application created event. Likewise for 
> modification time on every write.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4555) TestDefaultContainerExecutor#testContainerLaunchError fails on non-englihsh locale environment

2016-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15087492#comment-15087492
 ] 

Hadoop QA commented on YARN-4555:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 24s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 59s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 43s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12780967/YARN-4555.1.patch |
| JIRA Issue | YARN-4555 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 093b02e4a138 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6702e7d |
| Default Java | 1.7.0_91 |
| Multi-JDK versions |  

[jira] [Commented] (YARN-4238) createdTime and modifiedTime is not reported while publishing entities to ATSv2

2016-01-07 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15087533#comment-15087533
 ] 

Varun Saxena commented on YARN-4238:


No. Entities will be returned in descending order i.e. most recent first. 

> createdTime and modifiedTime is not reported while publishing entities to 
> ATSv2
> ---
>
> Key: YARN-4238
> URL: https://issues.apache.org/jira/browse/YARN-4238
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-4238-YARN-2928.01.patch, 
> YARN-4238-feature-YARN-2928.002.patch, YARN-4238-feature-YARN-2928.003.patch, 
> YARN-4238-feature-YARN-2928.02.patch
>
>
> While publishing entities from RM and elsewhere we are not sending created 
> time. For instance, created time in TimelineServiceV2Publisher class and for 
> other entities in other such similar classes is not updated. We can easily 
> update created time when sending application created event. Likewise for 
> modification time on every write.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4168) Test TestLogAggregationService.testLocalFileDeletionOnDiskFull failing

2016-01-07 Thread Takashi Ohnishi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takashi Ohnishi updated YARN-4168:
--
Attachment: YARN-4168.1.patch

> Test TestLogAggregationService.testLocalFileDeletionOnDiskFull failing
> --
>
> Key: YARN-4168
> URL: https://issues.apache.org/jira/browse/YARN-4168
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.0.0
> Environment: Jenkins
>Reporter: Steve Loughran
>Priority: Critical
> Attachments: YARN-4168.1.patch
>
>
> {{TestLogAggregationService.testLocalFileDeletionOnDiskFull}} failing on 
> [Jenkins build 
> 1136|https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-Yarn-trunk/1136/testReport/junit/org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation/TestLogAggregationService/testLocalFileDeletionOnDiskFull/]
> {code}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService.verifyLocalFileDeletion(TestLogAggregationService.java:229)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService.testLocalFileDeletionOnDiskFull(TestLogAggregationService.java:285)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4544) All the log messages about rolling monitoring interval are shown with the WARN level

2016-01-07 Thread Takashi Ohnishi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15087325#comment-15087325
 ] 

Takashi Ohnishi commented on YARN-4544:
---

Sure!

I will update the patch:)
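
For illustration, here is a rough standalone sketch of the log levels the 
description asks for (the class, method, and constant names are made up, and 
the logger is plain SLF4J rather than whatever the actual class uses): only 
the invalid-configuration case stays at WARN, while the other two drop to INFO.
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Standalone sketch only: choose the log level per case (a), (b), (c).
public class RollingMonitorIntervalLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(RollingMonitorIntervalLogging.class);
  private static final long MIN_INTERVAL_SECONDS = 3600;

  static void logConfiguredInterval(long intervalSeconds) {
    if (intervalSeconds <= 0) {
      // case (a): feature disabled on purpose -> INFO
      LOG.info("rollingMonitorInterval is set as {}. The log rolling monitoring"
          + " interval is disabled. The logs will be aggregated after this"
          + " application is finished.", intervalSeconds);
    } else if (intervalSeconds < MIN_INTERVAL_SECONDS) {
      // case (c): invalid value, the only case that warrants WARN
      LOG.warn("rollingMonitorInterval should be more than or equal to {}"
          + " seconds. Using {} seconds instead.",
          MIN_INTERVAL_SECONDS, MIN_INTERVAL_SECONDS);
    } else {
      // case (b): valid value -> INFO
      LOG.info("rollingMonitorInterval is set as {}. The logs will be"
          + " aggregated every {} seconds.", intervalSeconds, intervalSeconds);
    }
  }
}
{code}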

> All the log messages about rolling monitoring interval are shown with the 
> WARN level
> 
>
> Key: YARN-4544
> URL: https://issues.apache.org/jira/browse/YARN-4544
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation, nodemanager
>Affects Versions: 2.7.1
>Reporter: Takashi Ohnishi
>Assignee: Takashi Ohnishi
> Attachments: YARN-4544.patch
>
>
> About yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds, 
> there are three log messages corresponding to the value set to this 
> parameter, but all of them are shown with the WARN level.
> (a) disabled (default)
> {code}
> 2016-01-05 22:19:29,062 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggreg 
> ation.AppLogAggregatorImpl: rollingMonitorInterval is set as -1. The log 
> rolling mornitoring interval is disabled. The logs will be aggregated after 
> this application is finished.
> {code}
> (b) enabled
> {code}
> 2016-01-06 00:41:15,808 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl:
>  rollingMonitorInterval is set as 7200. The logs will be aggregated every 
> 7200 seconds
> {code}
> (c) enabled but wrong configuration
> {code}
> 2016-01-06 00:39:50,820 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl:
>  rollingMonitorIntervall should be more than or equal to 3600 seconds. Using 
> 3600 seconds instead.
> {code}
> I think it is better to output with WARN only in case (c), but it is ok to 
> output with INFO in case (a) and (b).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4555) TestDefaultContainerExecutor#testContainerLaunchError fails on non-englihsh locale environment

2016-01-07 Thread Takashi Ohnishi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takashi Ohnishi updated YARN-4555:
--
Attachment: YARN-4555.1.patch

I've created a patch which sets LANG=C for the container executor shell script.
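
For illustration, a minimal standalone sketch of the idea (not the actual 
patch; the class and method names are made up): force the C locale in the 
generated launch script so bash reports errors such as "No such file or 
directory" in English regardless of the locale the tests run under.
{code}
// Standalone sketch only: prepend LANG=C so shell diagnostics come out in English.
public class LaunchScriptSketch {
  public static String buildScript(String command) {
    StringBuilder script = new StringBuilder();
    script.append("#!/bin/bash\n");
    script.append("export LANG=C\n");   // locale-independent error messages
    script.append("exec ").append(command).append('\n');
    return script.toString();
  }

  public static void main(String[] args) {
    System.out.println(buildScript("./default_container_executor.sh"));
  }
}
{code}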

> TestDefaultContainerExecutor#testContainerLaunchError fails on non-englihsh 
> locale environment
> --
>
> Key: YARN-4555
> URL: https://issues.apache.org/jira/browse/YARN-4555
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.1
>Reporter: Takashi Ohnishi
>Assignee: Takashi Ohnishi
>Priority: Minor
> Attachments: YARN-4555.1.patch
>
>
> In my env where LANG=ja_JP.UTF-8, the test fails with 
> {code}
> ---
> Test set: 
> org.apache.hadoop.yarn.server.nodemanager.TestDefaultContainerExecutor
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.286 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.yarn.server.nodemanager.TestDefaultContainerExecutor
> testContainerLaunchError(org.apache.hadoop.yarn.server.nodemanager.TestDefaultContainerExecutor)
>   Time elapsed: 1.149 sec  <<< FAILURE!
> java.lang.AssertionError: Invalid Diagnostics message: Exception from 
> container-launch.
> Container id: CONTAINER_ID
> Exit code: 127
> Exception message: bash: 
> target/TestDefaultContainerExecutor/localDir/default_container_executor.sh: 
> そのようなファイルやディレクトリはありません
> Stack trace: ExitCodeException exitCode=127: bash: 
> target/TestDefaultContainerExecutor/localDir/default_container_executor.sh: 
> そのようなファイルやディレクトリはありません
> {code}
> This is because the test code assertion assumes the English locale as below.
> {code}
> public Object answer(InvocationOnMock invocationOnMock)
>     throws Throwable {
>   String diagnostics = (String) invocationOnMock.getArguments()[0];
>   assertTrue("Invalid Diagnostics message: " + diagnostics,
>       diagnostics.contains("No such file or directory"));
>   return null;
> }
> {code}
> This exists on trunk, too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4168) Test TestLogAggregationService.testLocalFileDeletionOnDiskFull failing

2016-01-07 Thread Takashi Ohnishi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15087538#comment-15087538
 ] 

Takashi Ohnishi commented on YARN-4168:
---

I encountered this, too.

It seems to be caused by asserting before the actual file deletion has 
completed, as noted in YARN-1978.

How about retrying the existence check?
I will attach a patch.
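
For illustration, a standalone sketch of the retry idea (not the actual patch; 
the names are made up): poll for the local log file to disappear instead of 
asserting right away, since the deletion service removes the files 
asynchronously.
{code}
import java.io.File;

// Standalone sketch only: wait for an asynchronous deletion before asserting.
public class WaitForDeletion {
  static boolean waitUntilDeleted(File file, long timeoutMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      if (!file.exists()) {
        return true;          // deletion finished
      }
      Thread.sleep(100);      // retry every 100 ms
    }
    return !file.exists();    // final check after the timeout
  }
}
{code}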

> Test TestLogAggregationService.testLocalFileDeletionOnDiskFull failing
> --
>
> Key: YARN-4168
> URL: https://issues.apache.org/jira/browse/YARN-4168
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.0.0
> Environment: Jenkins
>Reporter: Steve Loughran
>Priority: Critical
>
> {{TestLogAggregationService.testLocalFileDeletionOnDiskFull}} failing on 
> [Jenkins build 
> 1136|https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-Yarn-trunk/1136/testReport/junit/org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation/TestLogAggregationService/testLocalFileDeletionOnDiskFull/]
> {code}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService.verifyLocalFileDeletion(TestLogAggregationService.java:229)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService.testLocalFileDeletionOnDiskFull(TestLogAggregationService.java:285)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4549) Containers stuck in KILLING state

2016-01-07 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15087548#comment-15087548
 ] 

Jason Lowe commented on YARN-4549:
--

If this only happens to long-running containers and the pid files are missing 
for even RUNNING containers that have been up a while then I'm thinking 
something is coming along at some point and blowing away the pid files because 
they're too old.  Is there a tmp cleaner like tmpwatch or some other periodic 
maintenance process that could be cleaning up these "old" files?  A while back 
someone reported NM recovery issues because they were storing the NM leveldb 
state store files in /tmp and a tmp cleaner was periodically deleting some of 
the old leveldb files and corrupting the database.

You could also look in other areas under nmPrivate and see if some of the 
distributed cache directories have also been removed.  If that's the case then 
you should see messages like "Resource XXX is missing, localizing it again" in 
the NM logs as it tries to re-use a distcache entry but then discovers it's 
mysteriously missing from the local disk.  If whole directories have been 
reaped including the dist cache entries then it would strongly point to 
something like a periodic cleanup like tmpwatch or something similar.
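
For illustration, a hedged sketch of the mitigation implied here (the paths are 
examples only, and this assumes the standard YarnConfiguration keys for the NM 
local dirs and the NM recovery dir): keep both out of /tmp so a tmp cleaner 
cannot reap pid files, distributed cache entries, or leveldb state underneath a 
running NM.
{code}
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Standalone sketch only: point NM local and recovery dirs away from /tmp.
public class NodeManagerDirsExample {
  public static YarnConfiguration configure() {
    YarnConfiguration conf = new YarnConfiguration();
    conf.set(YarnConfiguration.NM_LOCAL_DIRS,
        "/var/lib/hadoop-yarn/nm-local-dir");
    conf.set(YarnConfiguration.NM_RECOVERY_DIR,
        "/var/lib/hadoop-yarn/yarn-nm-recovery");
    return conf;
  }
}
{code}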

> Containers stuck in KILLING state
> -
>
> Key: YARN-4549
> URL: https://issues.apache.org/jira/browse/YARN-4549
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Danil Serdyuchenko
>
> We are running samza 0.8 on YARN 2.7.1 with {{LinuxContainerExecutor}} as the 
> container-executor with cgroups configuration. Also we have NM recovery 
> enabled.
> We observe a lot of containers that get stuck in the KILLING state after the 
> NM tries to kill them. The container remains running indefinitely, which 
> causes some duplication as new containers are brought up to replace them. 
> Looking through the logs, the NM can't seem to get the container PID.
> {noformat}
> 16/01/05 05:16:44 INFO containermanager.ContainerManagerImpl: Stopping 
> container with container Id: container_1448454866800_0023_01_05
> 16/01/05 05:16:44 INFO nodemanager.NMAuditLogger: USER=ec2-user 
> IP=10.51.111.243OPERATION=Stop Container Request
> TARGET=ContainerManageImpl  RESULT=SUCCESS  
> APPID=application_1448454866800_0023
> CONTAINERID=container_1448454866800_0023_01_05
> 16/01/05 05:16:44 INFO container.ContainerImpl: Container 
> container_1448454866800_0023_01_05 transitioned from RUNNING to KILLING
> 16/01/05 05:16:44 INFO launcher.ContainerLaunch: Cleaning up container 
> container_1448454866800_0023_01_05
> 16/01/05 05:16:47 INFO launcher.ContainerLaunch: Could not get pid for 
> container_1448454866800_0023_01_05. Waited for 2000 ms.
> {noformat}
> The PID files for containers in the KILLING state are missing, and a few 
> other containers that have been in the RUNNING state for a few weeks are also 
> missing them. We weren't able to consistently replicate this and are hoping 
> that someone has come across this before.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3998) Add retry-times to let NM re-launch container when it fails to run

2016-01-07 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15087392#comment-15087392
 ] 

Varun Vasudev commented on YARN-3998:
-

Thanks for the patch [~hex108]!

In your implementation, the relaunched container will go through the 
launchContainer call, which will try to set up the container launch environment 
again (creating the local and log dirs, creating tokens, etc.). Won't this 
lead to a FileAlreadyExistsException being thrown as part of the launchContainer 
call? In addition, it also means that on a node with more than one local dir, 
different attempts could get allocated to different local dirs. I wonder if 
it would be better to move the retry logic into the launchContainer function 
instead of adding a new state transition.

Some feedback on the code itself -
1)
Rename ContainerRetry to ContainerRetryContext

2)
{code}
{@link ContainerRetryPolicy} : NEVER_RETRY(no matter what error code is
+ * when container fails to run, just do not retry), ALWAYS_RETRY(no matter
+ * what error code is, when container fails to run, just retry),
+ * RETRY_ON_SPECIFIC_ERROR_CODE(when container fails to run, do retry if 
the
+ * error code is one of errorCodes, otherwise do not retry.
+ * Note: if error code is 137(SIGKILL) or 143(SIGTERM), it will not retry
+ * because it is usually killed on purpose.
{code}
Specify that the default policy is NEVER_RETRY

3)
Rename retryTimes to maxRetries

4)
Change the interval unit to ms instead of seconds

5)
{code}
+  // remain retries to relaunch container if needed
+  private int remainRetries;
{code}
Rename remainRetries to remainingRetryAttempts

6)
{code}
+if (launchContext != null) {
+  this.containerRetry = launchContext.getContainerRetry();
+  if (this.containerRetry != null) {
+remainRetries = containerRetry.getRetryTimes();
+  }
+} else {
+  this.containerRetry = null;
+}
{code}
Instead of setting containerRetry to null, can we initialize it to a retry object 
with NEVER_RETRY? (See the sketch after this list.)

7)
Rename RetryWithFailureTransition to RetryFailureTransition

8)
Change
{code}
+LOG.info("Relaunch Container " + container.getContainerId()
++ ". Remain retry times : " + container.remainRetries
++ ". Retry interval is "
++ container.containerRetry.getRetryInterval() + "s");
{code}
to 
{code}
+LOG.info("Relaunching container " + container.getContainerId()
++ ". Remaining retry attempts : " + container.remainRetries
++ ". Retry interval is "
++ container.containerRetry.getRetryInterval() + "s");
{code}

9)
Rename storeContainerRemainRetries to storeContainerRemainingRetryAttempts
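
For point 6, a standalone sketch of the suggested default (all names below are 
illustrative, not taken from the patch): keep a shared "never retry" context as 
the initial value so the rest of the container code never needs a null check.
{code}
// Standalone sketch only: a non-null NEVER_RETRY default instead of null.
public class ContainerRetrySketch {
  enum ContainerRetryPolicy { NEVER_RETRY, ALWAYS_RETRY, RETRY_ON_SPECIFIC_ERROR_CODES }

  static final class ContainerRetryContext {
    static final ContainerRetryContext NEVER_RETRY =
        new ContainerRetryContext(ContainerRetryPolicy.NEVER_RETRY, 0);
    final ContainerRetryPolicy policy;
    final int maxRetries;

    ContainerRetryContext(ContainerRetryPolicy policy, int maxRetries) {
      this.policy = policy;
      this.maxRetries = maxRetries;
    }
  }

  // Defaults that make the "no retry information supplied" case explicit.
  private ContainerRetryContext retryContext = ContainerRetryContext.NEVER_RETRY;
  private int remainingRetryAttempts = 0;

  void initFrom(ContainerRetryContext fromLaunchContext) {
    if (fromLaunchContext != null) {
      this.retryContext = fromLaunchContext;
      this.remainingRetryAttempts = fromLaunchContext.maxRetries;
    }
  }
}
{code}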


> Add retry-times to let NM re-launch container when it fails to run
> --
>
> Key: YARN-3998
> URL: https://issues.apache.org/jira/browse/YARN-3998
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Jun Gong
>Assignee: Jun Gong
> Attachments: YARN-3998.01.patch, YARN-3998.02.patch
>
>
> I'd like to add a field(retry-times) in ContainerLaunchContext. When AM 
> launches containers, it could specify the value. Then NM will re-launch the 
> container 'retry-times' times when it fails to run(e.g.exit code is not 0). 
> It will save a lot of time. It avoids container localization. RM does not 
> need to re-schedule the container. And local files in container's working 
> directory will be left for re-use.(If container have downloaded some big 
> files, it does not need to re-download them when running again.) 
> We find it is useful in systems like Storm.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3940) Application moveToQueue should check NodeLabel permission

2016-01-07 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-3940:
---
Attachment: 0005-YARN-3940.patch

Thank you for looking into the patch, [~sunilg].

I have updated the patch based on your comments.

Point #3: I agree that the best approach is to store the labels actually used by 
the application. Currently that API is not available, and ResourceUsage was the 
next choice. Is that change required for the move-queue case? I will wait for 
comments from [~leftnoteasy] too.

I have addressed all the other comments and am attaching the updated patch for review.
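
For illustration, a standalone sketch of the kind of check being discussed 
(names are made up, not the actual patch): before moving the application, 
verify that the target queue can access every node label the application is 
using, treating "*" as access to all labels.
{code}
import java.util.Set;

// Standalone sketch only: reject the move if the target queue lacks a label.
public class MoveQueueLabelCheck {
  static void checkTargetQueueAccess(Set<String> targetQueueLabels,
      Set<String> labelsUsedByApp) {
    for (String label : labelsUsedByApp) {
      if (!targetQueueLabels.contains(label)
          && !targetQueueLabels.contains("*")) {
        throw new IllegalArgumentException(
            "Target queue does not have access to node label " + label);
      }
    }
  }
}
{code}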

> Application moveToQueue should check NodeLabel permission 
> --
>
> Key: YARN-3940
> URL: https://issues.apache.org/jira/browse/YARN-3940
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: 0001-YARN-3940.patch, 0002-YARN-3940.patch, 
> 0003-YARN-3940.patch, 0004-YARN-3940.patch, 0005-YARN-3940.patch
>
>
> Configure the capacity scheduler.
> Configure node labels and submit an application with {{queue=A Label=X}}.
> Move the application to queue {{B}}, which does not have access to label X.
> {code}
> 2015-07-20 19:46:19,626 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
>  Application attempt appattempt_1437385548409_0005_01 released container 
> container_e08_1437385548409_0005_01_02 on node: host: 
> host-10-19-92-117:64318 #containers=1 available= 
> used= with event: KILL
> 2015-07-20 19:46:20,970 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: 
> Invalid resource ask by application appattempt_1437385548409_0005_01
> org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid 
> resource request, queue=b1 doesn't have permission to access all labels in 
> resource request. labelExpression of resource request=x. Queue labels=y
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:304)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:234)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndvalidateRequest(SchedulerUtils.java:250)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.normalizeAndValidateRequests(RMServerUtils.java:106)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:515)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
> at 
> org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)
> {code}
> The same exception will be thrown until the *heartbeat timeout*, and then the 
> application state will be updated to *FAILED*.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3102) Decommisioned Nodes not listed in Web UI

2016-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088614#comment-15088614
 ] 

Hadoop QA commented on YARN-3102:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 21s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed with 
JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s 
{color} | {color:red} Patch generated 2 new checkstyle issues in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 (total was 66, now 68). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 18s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 introduced 1 new FindBugs issues. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 18s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed 
with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 31s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 40s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 137m 5s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Return value of putIfAbsent is ignored, but rmNode is reused in 
org.apache.hadoop.yarn.server.resourcemanager.NodesListManager.setDecomissionedNMs()
  At NodesListManager.java:ignored, but rmNode is reused in 
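
For anyone unfamiliar with this FindBugs pattern, a generic illustration in plain {{java.util.concurrent}} (not the actual NodesListManager code): when {{putIfAbsent()}} loses the race, the map keeps the value inserted by the other thread, so the freshly created object must not be reused afterwards; the return value tells you which instance won.

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PutIfAbsentSketch {
  // Correct pattern: honour the return value of putIfAbsent(). A non-null
  // return means another thread already inserted a value, and that value is
  // the one to use; the locally created object would be a detached duplicate.
  static <K, V> V getOrCreate(ConcurrentMap<K, V> map, K key, V freshValue) {
    V existing = map.putIfAbsent(key, freshValue);
    return existing != null ? existing : freshValue;
  }

  public static void main(String[] args) {
    ConcurrentMap<String, StringBuilder> nodes = new ConcurrentHashMap<>();
    // Both calls return the same instance, regardless of which one inserted it.
    StringBuilder a = getOrCreate(nodes, "host-1", new StringBuilder("a"));
    StringBuilder b = getOrCreate(nodes, "host-1", new StringBuilder("b"));
    System.out.println(a == b);  // true
  }
}
{code}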

[jira] [Commented] (YARN-4560) Make scheduler error checking message more user friendly

2016-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088611#comment-15088611
 ] 

Hadoop QA commented on YARN-4560:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
12s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 21s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed with 
JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
16s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 18s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed 
with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 59m 29s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed 
with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 60m 54s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed 
with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 137m 24s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781087/YARN-4560.001.patch |
| JIRA Issue | YARN-4560 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 11a0aceef44b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| 

[jira] [Commented] (YARN-4559) Make leader elector and zk store share the same curator client

2016-01-07 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088616#comment-15088616
 ] 

Jian He commented on YARN-4559:
---

bq. My preference would be for doing it in the RM
Yep, I can move the close into RM#serviceStop.
bq. The ZK-store specific configs 
That was my original intention too, but the problem is that the curator client 
creation depends on the RM username:password.
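
For illustration, a minimal sketch of the sharing (hedged: {{zkHostPort}}, {{rmUser}} and {{rmPassword}} are placeholders for values the RM already derives from its configuration, and this is not the attached patch): build one client, hand it to both the leader elector and the ZK store, and close it once from serviceStop().

{code}
import java.nio.charset.StandardCharsets;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class SharedCuratorSketch {
  // One client serves both the leader elector and the ZK state store.
  static CuratorFramework newSharedClient(String zkHostPort, String rmUser,
      String rmPassword) {
    CuratorFramework curator = CuratorFrameworkFactory.builder()
        .connectString(zkHostPort)
        .retryPolicy(new ExponentialBackoffRetry(1000, 3))
        // the RM username:password dependency mentioned above
        .authorization("digest",
            (rmUser + ":" + rmPassword).getBytes(StandardCharsets.UTF_8))
        .build();
    curator.start();
    return curator;
  }

  // The owner (e.g. the RM's serviceStop()) closes it exactly once, after the
  // elector and the store have stopped using it.
  static void stop(CuratorFramework curator) {
    curator.close();
  }
}
{code}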

> Make leader elector and zk store share the same curator client
> --
>
> Key: YARN-4559
> URL: https://issues.apache.org/jira/browse/YARN-4559
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-4559.1.patch
>
>
> After YARN-4438, we can reuse the same curator client for leader elector and 
> zk store



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4562) YARN WebApp ignores the configuration passed to it for keystore settings

2016-01-07 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated YARN-4562:
---
Attachment: YARN-4562.patch

Trivial patch. [~vinodkv] [~sseth] can you take a look?

> YARN WebApp ignores the configuration passed to it for keystore settings
> 
>
> Key: YARN-4562
> URL: https://issues.apache.org/jira/browse/YARN-4562
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
> Attachments: YARN-4562.patch
>
>
> The conf can be passed to the WebApps builder; however, the following code in 
> WebApps.java that builds the HttpServer2 object:
> {noformat}
> if (httpScheme.equals(WebAppUtils.HTTPS_PREFIX)) {
>   WebAppUtils.loadSslConfiguration(builder);
> }
> {noformat}
> ...results in loadSslConfiguration creating a new Configuration object; the 
> one that is passed in is ignored as far as the keystore and related settings 
> are concerned. loadSslConfiguration has another overload with a Configuration 
> parameter that should be used instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4562) YARN WebApp ignores the configuration passed to it for keystore settings

2016-01-07 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created YARN-4562:
--

 Summary: YARN WebApp ignores the configuration passed to it for 
keystore settings
 Key: YARN-4562
 URL: https://issues.apache.org/jira/browse/YARN-4562
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Sergey Shelukhin


The conf can be passed to the WebApps builder; however, the following code in 
WebApps.java that builds the HttpServer2 object:
{noformat}
if (httpScheme.equals(WebAppUtils.HTTPS_PREFIX)) {
  WebAppUtils.loadSslConfiguration(builder);
}
{noformat}
...results in loadSslConfiguration creating a new Configuration object; the one 
that is passed in is ignored as far as the keystore and related settings are 
concerned. loadSslConfiguration has another overload with a Configuration 
parameter that should be used instead.
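
For illustration, a hedged sketch of that suggestion (assuming {{conf}} is the Configuration handed to the WebApps builder; the actual patch may differ):

{code}
if (httpScheme.equals(WebAppUtils.HTTPS_PREFIX)) {
  // Use the caller's conf so its keystore settings are honoured, instead of
  // letting loadSslConfiguration(builder) create a fresh Configuration.
  WebAppUtils.loadSslConfiguration(builder, conf);
}
{code}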



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4562) YARN WebApp ignores the configuration passed to it for keystore settings

2016-01-07 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088627#comment-15088627
 ] 

Sergey Shelukhin commented on YARN-4562:


Or [~hitesh]. I'm not sure who owns WebApp :)

> YARN WebApp ignores the configuration passed to it for keystore settings
> 
>
> Key: YARN-4562
> URL: https://issues.apache.org/jira/browse/YARN-4562
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
> Attachments: YARN-4562.patch
>
>
> The conf can be passed to the WebApps builder; however, the following code in 
> WebApps.java that builds the HttpServer2 object:
> {noformat}
> if (httpScheme.equals(WebAppUtils.HTTPS_PREFIX)) {
>   WebAppUtils.loadSslConfiguration(builder);
> }
> {noformat}
> ...results in loadSslConfiguration creating a new Configuration object; the 
> one that is passed in is ignored as far as the keystore and related settings 
> are concerned. loadSslConfiguration has another overload with a Configuration 
> parameter that should be used instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4553) Add cgroups support for docker containers

2016-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088594#comment-15088594
 ] 

Hadoop QA commented on YARN-4553:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 55s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 29s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 23s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781101/YARN-4553.003.patch |
| JIRA Issue | YARN-4553 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 48d04cff4779 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 89022f8 |
| Default Java | 1.7.0_91 |
| Multi-JDK versions |  

[jira] [Commented] (YARN-3995) Some of the NM events are not getting published due race condition when AM container finishes in NM

2016-01-07 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088636#comment-15088636
 ] 

Naganarasimha G R commented on YARN-3995:
-

Oops, my mistake... I assumed the interface wrongly! It's similar to the timer 
service, where we can specify when the task should be executed. Got it, I will 
correct it!

> Some of the NM events are not getting published due race condition when AM 
> container finishes in NM 
> 
>
> Key: YARN-3995
> URL: https://issues.apache.org/jira/browse/YARN-3995
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, timelineserver
>Affects Versions: YARN-2928
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-3995-feature-YARN-2928.v1.001.patch
>
>
> As discussed in YARN-3045: while testing with TestDistributedShell, we found 
> that a few of the container metrics events were failing because of a race 
> condition. When the AM container finishes and removes the collector for the 
> app, there is still a possibility that events published for the app by the 
> current NM and other NMs are still in the pipeline, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4238) createdTime and modifiedTime is not reported while publishing entities to ATSv2

2016-01-07 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15087676#comment-15087676
 ] 

Naganarasimha G R commented on YARN-4238:
-

In that case my opinion would be to remove the modification time and add it 
back when we have a stronger use case. Thoughts?

> createdTime and modifiedTime is not reported while publishing entities to 
> ATSv2
> ---
>
> Key: YARN-4238
> URL: https://issues.apache.org/jira/browse/YARN-4238
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-4238-YARN-2928.01.patch, 
> YARN-4238-feature-YARN-2928.002.patch, YARN-4238-feature-YARN-2928.003.patch, 
> YARN-4238-feature-YARN-2928.02.patch
>
>
> While publishing entities from the RM and elsewhere we are not sending the 
> created time. For instance, the created time in the TimelineServiceV2Publisher 
> class and in other similar classes is not updated. We can easily update the 
> created time when sending the application-created event, and likewise the 
> modification time on every write.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3940) Application moveToQueue should check NodeLabel permission

2016-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15087686#comment-15087686
 ] 

Hadoop QA commented on YARN-3940:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s 
{color} | {color:red} Patch generated 2 new checkstyle issues in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 (total was 113, now 114). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 19s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 introduced 1 new FindBugs issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 63m 11s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed 
with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 64m 28s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed 
with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 144m 55s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  java.util.Set is incompatible with expected argument 
type String in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.checkQueuePartition(ApplicationId,
 LeafQueue)  At CapacityScheduler.java:argument type String in 

[jira] [Commented] (YARN-3995) Some of the NM events are not getting published due race condition when AM container finishes in NM

2016-01-07 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088549#comment-15088549
 ] 

Sangjin Lee commented on YARN-3995:
---

bq. Yes i wanted to address it as i was trying to point out earlier Instead of 
spawning multiple threads may be we can have single thread which does this 
activity

Oops, sorry. I didn't see you already mentioned this.

{quote}
IIUC the approach you mentioned in the callable we will be sleeping for the 
configured period for a application and then remove it. but if multiple apps at 
the same time finish then initial apps only wait for configured period but 
subsequent apps wait for lil more time than the earlier ones.(app's wait period 
+ other apps wait period in the queue ) thoughts?
{quote}

ScheduledExecutorService is much more straightforward than that. We can simply 
take advantage of the scheduling feature: the Runnable (or Callable, it doesn't 
matter which) just executes removeApplicationId():

{code}
// one shared scheduler; a single thread is enough since each task is tiny
ScheduledExecutorService scheduler =
    Executors.newSingleThreadScheduledExecutor();
...
public void stopContainer(ContainerTerminationContext context) {
  ...
  // defer removing the app's collector by the linger period without blocking
  scheduler.schedule(new Runnable() {
    @Override
    public void run() {
      removeApplicationId(appId);
    }
  }, collectorLingerPeriod, TimeUnit.MILLISECONDS);
}
{code}

It does not implement the delay by putting the executor service thread to sleep 
for that period, so there is no worry about delays propagating to the next work 
item; the delay management is all done through an internal queue that 
understands the delays.
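
A self-contained toy example (plain JDK, not YARN code) that demonstrates the point: even with a single thread, tasks scheduled with the same delay all fire at roughly the same time, because the executor only waits until the next due task rather than sleeping once per task.

{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DelayedRemovalDemo {
  public static void main(String[] args) throws InterruptedException {
    ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();
    long lingerMs = 1000;  // stands in for the configured linger period
    long start = System.currentTimeMillis();

    // Three "apps" finish at the same time; each removal is scheduled to run
    // lingerMs later, and none of them waits behind the others.
    for (String appId : new String[] {"app_1", "app_2", "app_3"}) {
      scheduler.schedule(
          () -> System.out.println(appId + " removed after "
              + (System.currentTimeMillis() - start) + " ms"),
          lingerMs, TimeUnit.MILLISECONDS);
    }

    Thread.sleep(lingerMs + 500);  // let the scheduled removals run
    scheduler.shutdown();
  }
}
{code}

All three lines print at roughly 1000 ms, not at 1000/2000/3000 ms.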

> Some of the NM events are not getting published due race condition when AM 
> container finishes in NM 
> 
>
> Key: YARN-3995
> URL: https://issues.apache.org/jira/browse/YARN-3995
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, timelineserver
>Affects Versions: YARN-2928
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-3995-feature-YARN-2928.v1.001.patch
>
>
> As discussed in YARN-3045: while testing with TestDistributedShell, we found 
> that a few of the container metrics events were failing because of a race 
> condition. When the AM container finishes and removes the collector for the 
> app, there is still a possibility that events published for the app by the 
> current NM and other NMs are still in the pipeline, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4556) TestFifoScheduler.testResourceOverCommit fails

2016-01-07 Thread Akihiro Suda (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088728#comment-15088728
 ] 

Akihiro Suda commented on YARN-4556:


{{AdminService.updateNodeResource()}} is asynchronous; it just enqueues 
{{RMNodeResourceUpdateEvent}} via {{AsyncDispatcher.handle()}}.
https://github.com/apache/hadoop/blob/89022f8d4bac0e9d0b848fd91e9c4d700fe1cdbe/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/AdminService.java#L610-L611

So we need to add some delays and retries to the test.
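
A hedged sketch of the kind of wait-and-retry the test could use (the names below are assumptions for illustration, not the actual TestFifoScheduler code): poll for the expected value instead of asserting immediately after the asynchronous update.

{code}
import java.util.function.Supplier;

public final class RetryAssert {
  // Poll until the supplier returns the expected value or the timeout expires.
  public static <T> void assertEventually(Supplier<T> actual, T expected,
      long timeoutMs, long intervalMs) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!expected.equals(actual.get())) {
      if (System.currentTimeMillis() > deadline) {
        throw new AssertionError(
            "expected " + expected + " but was " + actual.get());
      }
      Thread.sleep(intervalMs);  // give AsyncDispatcher time to deliver the event
    }
  }
}
{code}

The test would then call something like {{assertEventually(() -> availableMemoryOfTheNode(), -2048, 5000, 100)}} after {{updateNodeResource()}} instead of asserting right away.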


>  TestFifoScheduler.testResourceOverCommit fails
> ---
>
> Key: YARN-4556
> URL: https://issues.apache.org/jira/browse/YARN-4556
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Akihiro Suda
>
> From YARN-4548 Jenkins log: 
> https://builds.apache.org/job/PreCommit-YARN-Build/10181/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
> {code}
> Running 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler
> Tests run: 16, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 31.004 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler
> testResourceOverCommit(org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler)
>   Time elapsed: 4.746 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<-2048> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler.testResourceOverCommit(TestFifoScheduler.java:1142)
> {code}
> https://github.com/apache/hadoop/blob/8676a118a12165ae5a8b80a2a4596c133471ebc1/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/TestFifoScheduler.java#L1142
> It seems that Jenkins has been hitting this intermittently since April 2015
> https://www.google.com/search?q=TestFifoScheduler.testResourceOverCommit



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4556) TestFifoScheduler.testResourceOverCommit fails

2016-01-07 Thread Akihiro Suda (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akihiro Suda updated YARN-4556:
---
Attachment: YARN-4556-1.patch

Attached YARN-4556-1.patch


>  TestFifoScheduler.testResourceOverCommit fails
> ---
>
> Key: YARN-4556
> URL: https://issues.apache.org/jira/browse/YARN-4556
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Akihiro Suda
> Attachments: YARN-4556-1.patch
>
>
> From YARN-4548 Jenkins log: 
> https://builds.apache.org/job/PreCommit-YARN-Build/10181/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_66.txt
> {code}
> Running 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler
> Tests run: 16, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 31.004 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler
> testResourceOverCommit(org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler)
>   Time elapsed: 4.746 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<-2048> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler.testResourceOverCommit(TestFifoScheduler.java:1142)
> {code}
> https://github.com/apache/hadoop/blob/8676a118a12165ae5a8b80a2a4596c133471ebc1/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/TestFifoScheduler.java#L1142
> It seems that Jenkins has been hitting this intermittently since April 2015
> https://www.google.com/search?q=TestFifoScheduler.testResourceOverCommit



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4414) Nodemanager connection errors are retried at multiple levels

2016-01-07 Thread Xianyin Xin (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088804#comment-15088804
 ] 

Xianyin Xin commented on YARN-4414:
---

Hi [~lichangleo], do we also need to revisit the two-layer retries in {{RMProxy}}? 
IIUC, the proxy layer retries for up to 15 min with a 30-second retry interval, 
but behind the scenes the RM proxy derives a maximum retry count from those two 
values. Each IPC-layer retry takes more than 1 sec and is attempted 10 times by 
default, so the actual total wait time is 15 min + 15 / 0.5 * 10 * (more than 1 
sec), i.e. 30 proxy-level attempts each adding 10+ extra seconds of IPC retries 
(more than 5 extra minutes in total), which is much more than 15 min.

> Nodemanager connection errors are retried at multiple levels
> 
>
> Key: YARN-4414
> URL: https://issues.apache.org/jira/browse/YARN-4414
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.1, 2.6.2
>Reporter: Jason Lowe
>Assignee: Chang Li
> Attachments: YARN-4414.1.2.patch, YARN-4414.1.2.patch, 
> YARN-4414.1.3.patch, YARN-4414.1.patch, YARN-4414.2.patch
>
>
> This is related to YARN-3238.  Ran into more scenarios where connection 
> errors are being retried at multiple levels, like NoRouteToHostException.  
> The fix for YARN-3238 was too specific, and I think we need a more general 
> solution to catch a wider array of connection errors that can occur to avoid 
> retrying them both at the RPC layer and at the NM proxy layer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4411) ResourceManager IllegalArgumentException error

2016-01-07 Thread yarntime (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yarntime updated YARN-4411:
---
Attachment: (was: YARN-4411.001.patch)

> ResourceManager IllegalArgumentException error
> --
>
> Key: YARN-4411
> URL: https://issues.apache.org/jira/browse/YARN-4411
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.1
>Reporter: yarntime
>Assignee: yarntime
>
> In version 2.7.1, line 1914 may cause an IllegalArgumentException in 
> RMAppAttemptImpl:
>   YarnApplicationAttemptState.valueOf(this.getState().toString())
> caused by this.getState() returning type RMAppAttemptState, which may not be 
> convertible to YarnApplicationAttemptState.
> {noformat}
> java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.yarn.api.records.YarnApplicationAttemptState.LAUNCHED_UNMANAGED_SAVING
> at java.lang.Enum.valueOf(Enum.java:236)
> at 
> org.apache.hadoop.yarn.api.records.YarnApplicationAttemptState.valueOf(YarnApplicationAttemptState.java:27)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.createApplicationAttemptReport(RMAppAttemptImpl.java:1870)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationAttemptReport(ClientRMService.java:355)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationAttemptReport(ApplicationClientProtocolPBServiceImpl.java:355)
> at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:425)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
> {noformat}
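
One defensive way to avoid the blind {{valueOf()}} call described above (a hedged sketch, not necessarily what the YARN-4411 patch does): catch the conversion failure and map internal-only states, such as LAUNCHED_UNMANAGED_SAVING, to a caller-chosen public fallback.

{code}
import org.apache.hadoop.yarn.api.records.YarnApplicationAttemptState;

public final class AttemptStateSketch {
  // Convert an internal RMAppAttemptState name to the public enum, falling
  // back when there is no one-to-one public value.
  static YarnApplicationAttemptState toPublicState(String internalName,
      YarnApplicationAttemptState fallback) {
    try {
      return YarnApplicationAttemptState.valueOf(internalName);
    } catch (IllegalArgumentException e) {
      return fallback;
    }
  }
}
{code}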



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3940) Application moveToQueue should check NodeLabel permission

2016-01-07 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088779#comment-15088779
 ] 

Bibin A Chundatt commented on YARN-3940:


Hi [~Naganarasimha]
Thank you for the review. I have already taken care of that too:
{noformat}
   || targetqueuelabels.contains(RMNodeLabelsManager.ANY)) 
{noformat}

> Application moveToQueue should check NodeLabel permission 
> --
>
> Key: YARN-3940
> URL: https://issues.apache.org/jira/browse/YARN-3940
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: 0001-YARN-3940.patch, 0002-YARN-3940.patch, 
> 0003-YARN-3940.patch, 0004-YARN-3940.patch, 0005-YARN-3940.patch
>
>
> Configure the capacity scheduler.
> Configure node labels and submit an application with {{queue=A Label=X}}.
> Move the application to queue {{B}}, which does not have access to label x.
> {code}
> 2015-07-20 19:46:19,626 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
>  Application attempt appattempt_1437385548409_0005_01 released container 
> container_e08_1437385548409_0005_01_02 on node: host: 
> host-10-19-92-117:64318 #containers=1 available= 
> used= with event: KILL
> 2015-07-20 19:46:20,970 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: 
> Invalid resource ask by application appattempt_1437385548409_0005_01
> org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid 
> resource request, queue=b1 doesn't have permission to access all labels in 
> resource request. labelExpression of resource request=x. Queue labels=y
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:304)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:234)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndvalidateRequest(SchedulerUtils.java:250)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.normalizeAndValidateRequests(RMServerUtils.java:106)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:515)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
> at 
> org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)
> {code}
> The same exception will be thrown until the *heartbeat timeout*, 
> and then the application state will be updated to *FAILED*.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2402) NM restart: Container recovery for Windows

2016-01-07 Thread Yuqi Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Wang updated YARN-2402:

Attachment: YARN-2402-v1.patch

Attaching a patch that adds an exit-code script for Windows, which is used to 
get the exit code from a recovered container.

> NM restart: Container recovery for Windows
> --
>
> Key: YARN-2402
> URL: https://issues.apache.org/jira/browse/YARN-2402
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Jason Lowe
> Attachments: YARN-2402-v1.patch
>
>
> We should add container recovery for NM restart on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4411) ResourceManager IllegalArgumentException error

2016-01-07 Thread yarntime (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yarntime updated YARN-4411:
---
Attachment: YARN-4411.001.patch

patch

> ResourceManager IllegalArgumentException error
> --
>
> Key: YARN-4411
> URL: https://issues.apache.org/jira/browse/YARN-4411
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.1
>Reporter: yarntime
>Assignee: yarntime
> Attachments: YARN-4411.001.patch
>
>
> In version 2.7.1, line 1914 may cause an IllegalArgumentException in 
> RMAppAttemptImpl:
>   YarnApplicationAttemptState.valueOf(this.getState().toString())
> caused by this.getState() returning type RMAppAttemptState, which may not be 
> convertible to YarnApplicationAttemptState.
> {noformat}
> java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.yarn.api.records.YarnApplicationAttemptState.LAUNCHED_UNMANAGED_SAVING
> at java.lang.Enum.valueOf(Enum.java:236)
> at 
> org.apache.hadoop.yarn.api.records.YarnApplicationAttemptState.valueOf(YarnApplicationAttemptState.java:27)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.createApplicationAttemptReport(RMAppAttemptImpl.java:1870)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationAttemptReport(ClientRMService.java:355)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationAttemptReport(ApplicationClientProtocolPBServiceImpl.java:355)
> at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:425)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2402) NM restart: Container recovery for Windows

2016-01-07 Thread Yuqi Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Wang updated YARN-2402:

Affects Version/s: (was: 2.5.0)
   2.6.0

> NM restart: Container recovery for Windows
> --
>
> Key: YARN-2402
> URL: https://issues.apache.org/jira/browse/YARN-2402
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Jason Lowe
>
> We should add container recovery for NM restart on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4335) Allow ResourceRequests to specify ExecutionType of a request ask

2016-01-07 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088007#comment-15088007
 ] 

Konstantinos Karanasos commented on YARN-4335:
--

[~leftnoteasy], thanks again for the feedback.
I addressed your latest comments in my last patch -- please give it a look and 
let me know if it looks OK, so we can go ahead and push it to the branch.

> Allow ResourceRequests to specify ExecutionType of a request ask
> 
>
> Key: YARN-4335
> URL: https://issues.apache.org/jira/browse/YARN-4335
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-4335-yarn-2877.001.patch, YARN-4335.002.patch, 
> YARN-4335.003.patch
>
>
> YARN-2882 introduced container types that are internal (not user-facing) and 
> are used by the ContainerManager during execution at the NM.
> With this JIRA we are introducing (user-facing) resource request types that 
> are used by the AM to specify the type of the ResourceRequest.
> We will initially support two resource request types: CONSERVATIVE and 
> OPTIMISTIC.
> CONSERVATIVE resource requests will be handed internally to containers of 
> GUARANTEED type, whereas OPTIMISTIC resource requests will be handed to 
> QUEUEABLE containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4414) Nodemanager connection errors are retried at multiple levels

2016-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088047#comment-15088047
 ] 

Hadoop QA commented on YARN-4414:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 8s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 17s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 20s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 0s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 46s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 14s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 10s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 23s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781032/YARN-4414.2.patch |
| JIRA Issue | YARN-4414 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |

[jira] [Commented] (YARN-4519) potential deadlock of CapacityScheduler between decrease container and assign containers

2016-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088054#comment-15088054
 ] 

Hadoop QA commented on YARN-4519:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 15s 
{color} | {color:red} Patch generated 1 new checkstyle issues in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 (total was 145, now 144). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 39s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 16s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 136m 58s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler |
| JDK v1.7.0_91 Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781009/YARN-4519.1.patch |
| JIRA Issue | YARN-4519 |
| Optional Tests |  asflicense  compile  javac  javadoc  

[jira] [Commented] (YARN-3223) Resource update during NM graceful decommission

2016-01-07 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088064#comment-15088064
 ] 

Junping Du commented on YARN-3223:
--

[~brookz], any feedback on my proposal above?

> Resource update during NM graceful decommission
> ---
>
> Key: YARN-3223
> URL: https://issues.apache.org/jira/browse/YARN-3223
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful, nodemanager, resourcemanager
>Affects Versions: 2.7.1
>Reporter: Junping Du
>Assignee: Brook Zhou
> Attachments: YARN-3223-v0.patch, YARN-3223-v1.patch, 
> YARN-3223-v2.patch, YARN-3223-v3.patch
>
>
> During NM graceful decommission, we should handle resource updates properly, 
> including: making RMNode keep track of the old resource for a possible 
> rollback, keeping the available resource at 0, and updating the used resource 
> when a container finishes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (YARN-2888) Corrective mechanisms for rebalancing NM container queues

2016-01-07 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh reassigned YARN-2888:
-

Assignee: Arun Suresh

> Corrective mechanisms for rebalancing NM container queues
> -
>
> Key: YARN-2888
> URL: https://issues.apache.org/jira/browse/YARN-2888
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Arun Suresh
>
> Bad queuing decisions by the LocalRMs (e.g., due to the distributed nature of 
> the scheduling decisions or due to having a stale image of the system) may 
> lead to an imbalance in the waiting times of the NM container queues. This 
> can in turn have an impact in job execution times and cluster utilization.
> To this end, we introduce corrective mechanisms that may remove (whenever 
> needed) container requests from overloaded queues, adding them to less-loaded 
> ones.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4544) All the log messages about rolling monitoring interval are shown with WARN level

2016-01-07 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated YARN-4544:

Summary: All the log messages about rolling monitoring interval are shown 
with WARN level  (was: All the log messages about rolling monitoring interval 
are shown with the WARN level)

> All the log messages about rolling monitoring interval are shown with WARN 
> level
> 
>
> Key: YARN-4544
> URL: https://issues.apache.org/jira/browse/YARN-4544
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation, nodemanager
>Affects Versions: 2.7.1
>Reporter: Takashi Ohnishi
>Assignee: Takashi Ohnishi
>Priority: Minor
> Attachments: YARN-4544.2.patch, YARN-4544.patch
>
>
> About yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds, 
> there are three log messages corresponding to the value set to this 
> parameter, but all of them are shown with the WARN level.
> (a) disabled (default)
> {code}
> 2016-01-05 22:19:29,062 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggreg 
> ation.AppLogAggregatorImpl: rollingMonitorInterval is set as -1. The log 
> rolling mornitoring interval is disabled. The logs will be aggregated after 
> this application is finished.
> {code}
> (b) enabled
> {code}
> 2016-01-06 00:41:15,808 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl:
>  rollingMonitorInterval is set as 7200. The logs will be aggregated every 
> 7200 seconds
> {code}
> (c) enabled but wrong configuration
> {code}
> 2016-01-06 00:39:50,820 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl:
>  rollingMonitorIntervall should be more than or equal to 3600 seconds. Using 
> 3600 seconds instead.
> {code}
> I think it is better to output with WARN only in case (c), but it is ok to 
> output with INFO in case (a) and (b).
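
For illustration only, a sketch of the proposed levels (this is not the attached patch; the field and constant names below are assumed, not taken from AppLogAggregatorImpl):

{code}
// Sketch only: log cases (a) and (b) at INFO, keep case (c) at WARN.
// "rollingMonitorInterval" and "minRollingInterval" are assumed names.
if (rollingMonitorInterval <= 0) {
  // (a) rolling aggregation disabled -- expected configuration
  LOG.info("rollingMonitorInterval is set as " + rollingMonitorInterval
      + ". The log rolling monitoring interval is disabled."
      + " The logs will be aggregated after this application is finished.");
} else if (rollingMonitorInterval < minRollingInterval) {
  // (c) misconfiguration -- the only case that warrants WARN
  LOG.warn("rollingMonitorInterval should be more than or equal to "
      + minRollingInterval + " seconds. Using " + minRollingInterval
      + " seconds instead.");
  rollingMonitorInterval = minRollingInterval;
} else {
  // (b) valid interval -- informational
  LOG.info("rollingMonitorInterval is set as " + rollingMonitorInterval
      + ". The logs will be aggregated every " + rollingMonitorInterval
      + " seconds");
}
{code}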



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4544) All the log messages about rolling monitoring interval are shown with the WARN level

2016-01-07 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15087624#comment-15087624
 ] 

Akira AJISAKA commented on YARN-4544:
-

Committing this.

> All the log messages about rolling monitoring interval are shown with the 
> WARN level
> 
>
> Key: YARN-4544
> URL: https://issues.apache.org/jira/browse/YARN-4544
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation, nodemanager
>Affects Versions: 2.7.1
>Reporter: Takashi Ohnishi
>Assignee: Takashi Ohnishi
> Attachments: YARN-4544.2.patch, YARN-4544.patch
>
>
> About yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds, 
> there are three log messages corresponding to the value set to this 
> parameter, but all of them are shown with the WARN level.
> (a) disabled (default)
> {code}
> 2016-01-05 22:19:29,062 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggreg 
> ation.AppLogAggregatorImpl: rollingMonitorInterval is set as -1. The log 
> rolling mornitoring interval is disabled. The logs will be aggregated after 
> this application is finished.
> {code}
> (b) enabled
> {code}
> 2016-01-06 00:41:15,808 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl:
>  rollingMonitorInterval is set as 7200. The logs will be aggregated every 
> 7200 seconds
> {code}
> (c) enabled but wrong configuration
> {code}
> 2016-01-06 00:39:50,820 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl:
>  rollingMonitorIntervall should be more than or equal to 3600 seconds. Using 
> 3600 seconds instead.
> {code}
> I think it is better to output with WARN only in case (c), but it is ok to 
> output with INFO in case (a) and (b).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4519) potential deadlock of CapacityScheduler between decrease container and assign containers

2016-01-07 Thread MENG DING (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MENG DING updated YARN-4519:

Attachment: YARN-4519.1.patch

Attaching the latest patch that addresses this issue:

bq. We need to make sure following operations are under same CS synchronization 
lock:
1. Compute delta resource for increase request and insert to application
2. Compute delta resource for decrease request and call CS.decreaseContainer
3. Rollback action

1 and 2 are addressed in this patch. 3 will be addressed in YARN-4138.
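
To illustrate the lock-ordering issue in general terms (a sketch only; the class and method names below are simplified stand-ins, not the actual CapacityScheduler code), the deadlock disappears once both paths acquire the scheduler lock before the per-application lock:

{code}
// Thread A (allocate path) used to take: app lock -> scheduler lock
// Thread B (scheduler thread) takes:     scheduler lock -> app lock
// Taking the locks in one consistent order removes the cycle.
class Scheduler {
  private final Object schedulerLock = new Object();

  void decreaseContainer(App app) {
    synchronized (schedulerLock) {   // enter the scheduler lock first...
      synchronized (app) {           // ...then the per-application lock
        // compute the delta resource and apply the decrease
      }
    }
  }

  void allocateContainersToNode(App app) {
    synchronized (schedulerLock) {   // same order on the scheduler thread
      synchronized (app) {
        // assign containers on the node
      }
    }
  }
}

class App { /* per-application state guarded by its own monitor */ }
{code}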

> potential deadlock of CapacityScheduler between decrease container and assign 
> containers
> 
>
> Key: YARN-4519
> URL: https://issues.apache.org/jira/browse/YARN-4519
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Reporter: sandflee
>Assignee: MENG DING
> Attachments: YARN-4519.1.patch
>
>
> In CapacityScheduler.allocate(), the FiCaSchedulerApp sync lock is taken first, 
> and the CapacityScheduler sync lock may then be taken in decreaseContainer().
> In the scheduler thread, the CapacityScheduler sync lock is taken first in 
> allocateContainersToNode(), and the FiCaSchedulerApp sync lock may then be 
> taken in FiCaSchedulerApp.assignContainers(). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3367) Replace starting a separate thread for post entity with event loop in TimelineClient

2016-01-07 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15087811#comment-15087811
 ] 

Sangjin Lee commented on YARN-3367:
---

[~Naganarasimha] and I spoke offline about this, but I wanted to recapture some 
of the points here.

I generally agree with the direction and the scope of the patch. The things 
that are being considered here are related enough that they should be handled 
as a whole.

Regarding clubbing asynchronous puts together in a single REST call, I wanted 
to be clear that its purpose is to reduce the overhead of the REST call 
latency. That said, we should be careful to keep buffering to a minimum as it 
would certainly complicate the picture in terms of handling put calls that 
arrive very late (that won't happen with the current patch).

Also, we should make sure the combined put REST call doesn't result in too big 
a payload. So in addition to combining calls, we should have a limitation on 
the size of the resulting PUT call. I don't think we need to be absolutely 
accurate here. We can use a simple measure (e.g. the number of entities + 
events + metrics, or even simply number of entities) to make sure things do not 
get out of control when put calls are made rapidly. I don't think this is an 
oft-occurring situation, but it would be good to have that safety.

Regarding the implementation of the async thread and the interaction with the 
workload:
ExecutorService is thread management plus work queue management, but at minimum 
we can use the thread management portion of things. That should help eliminate 
the need for the wait-notify done for shutting down the thread, coordinating 
with the thread shutdown, etc.

Also, instead of wait-notify for the work completion, I would encourage using 
things like CountDownLatch or Future. Those will simplify code tremendously and 
also minimize room for errors. As a rule, I would advocate using higher level 
abstractions provided by java.util.concurrent over primitives, unless the 
concurrency utilities are not able to provide the right feature (which should 
be uncommon).

Another item: it would be good to have a self-contained "work item" instead of 
using a lookup on a map. For example, timeline entities + async flag can be a 
self-contained work item. Then it would be much simpler to deal with.
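
A minimal sketch of the ExecutorService/Future suggestion above (plain java.util.concurrent; the entity type and REST call are placeholders, and this is not the actual TimelineClient code):

{code}
import java.util.concurrent.*;

class TimelinePutDispatcher {
  // A single worker thread preserves the order in which puts were submitted.
  private final ExecutorService executor = Executors.newSingleThreadExecutor();

  /** Async put: enqueue a self-contained work item and return immediately. */
  Future<?> putEntitiesAsync(final Object entities) {
    return executor.submit(new Runnable() {
      @Override public void run() { postViaRest(entities); }
    });
  }

  /** Sync put: a Future replaces hand-rolled wait/notify for completion. */
  void putEntities(Object entities) throws Exception {
    putEntitiesAsync(entities).get();
  }

  void stop() throws InterruptedException {
    // shutdown() + awaitTermination() replace manual thread-shutdown coordination
    executor.shutdown();
    executor.awaitTermination(30, TimeUnit.SECONDS);
  }

  private void postViaRest(Object entities) { /* combined REST PUT goes here */ }
}
{code}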

> Replace starting a separate thread for post entity with event loop in 
> TimelineClient
> 
>
> Key: YARN-3367
> URL: https://issues.apache.org/jira/browse/YARN-3367
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Junping Du
>Assignee: Naganarasimha G R
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-3367-feature-YARN-2928.003.patch, 
> YARN-3367-feature-YARN-2928.v1.002.patch, 
> YARN-3367-feature-YARN-2928.v1.004.patch, YARN-3367.YARN-2928.001.patch
>
>
> Since YARN-3039, we added a loop in TimelineClient to wait for 
> collectorServiceAddress to be ready before posting any entity. In consumers of 
> TimelineClient (like the AM), we are starting a new thread for each call to get 
> rid of a potential deadlock in the main thread. This approach has at least 3 
> major defects:
> 1. The consumer needs additional code to wrap a thread before calling 
> putEntities() in TimelineClient.
> 2. It costs many thread resources, which is unnecessary.
> 3. The sequence of events could be out of order because each posting 
> operation thread gets out of the waiting loop randomly.
> We should have something like an event loop on the TimelineClient side: 
> putEntities() only puts the related entities into a queue of entities, and a 
> separate thread delivers the entities in the queue to the collector via REST 
> calls.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4438) Implement RM leader election with curator

2016-01-07 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-4438:
--
Attachment: YARN-4438.6.patch

Thanks for reviewing the patch!
Attached a new patch.

> Implement RM leader election with curator
> -
>
> Key: YARN-4438
> URL: https://issues.apache.org/jira/browse/YARN-4438
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-4438.1.patch, YARN-4438.2.patch, YARN-4438.3.patch, 
> YARN-4438.4.patch, YARN-4438.5.patch, YARN-4438.6.patch
>
>
> This is to implement the leader election with curator instead of the 
> ActiveStandbyElector from common package,  this also avoids adding more 
> configs in common to suit RM's own needs. 
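
For readers unfamiliar with Curator, a minimal standalone sketch of leader election with LeaderLatch is below (illustrative only; the connect string and election path are placeholders, and this is not the code in the patch):

{code}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.framework.recipes.leader.LeaderLatchListener;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class CuratorElectionSketch {
  public static void main(String[] args) throws Exception {
    // Connect to ZooKeeper with exponential backoff on connection loss.
    CuratorFramework client = CuratorFrameworkFactory.newClient(
        "zk1:2181,zk2:2181,zk3:2181", new ExponentialBackoffRetry(1000, 3));
    client.start();

    // All candidates use the same latch path; Curator picks one leader.
    LeaderLatch latch = new LeaderLatch(client, "/yarn-leader-election/cluster1");
    latch.addListener(new LeaderLatchListener() {
      @Override public void isLeader()  { /* transition this RM to Active */ }
      @Override public void notLeader() { /* transition this RM to Standby */ }
    });
    latch.start();   // joins the election; callbacks fire on leadership changes
    // ... on shutdown: latch.close(); client.close();
  }
}
{code}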



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4414) Nodemanager connection errors are retried at multiple levels

2016-01-07 Thread Chang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chang Li updated YARN-4414:
---
Attachment: YARN-4414.2.patch

Thanks [~jlowe] for the review!
Updated the .2 patch to remove getNMProxy2 and implement getProxy() in terms of 
getProxy(Configuration).
I set the NM address to a dummy value (1234) so that it triggers a connection 
error and RPC-level retries.
{{BaseContainerManagerTest}} sets it to {code}"0.0.0.0:" + 
ServerSocketUtil.getPort(49162, 10);{code}, a normal address, so the RPC-level 
retry could not be triggered there.

> Nodemanager connection errors are retried at multiple levels
> 
>
> Key: YARN-4414
> URL: https://issues.apache.org/jira/browse/YARN-4414
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.1, 2.6.2
>Reporter: Jason Lowe
>Assignee: Chang Li
> Attachments: YARN-4414.1.2.patch, YARN-4414.1.2.patch, 
> YARN-4414.1.3.patch, YARN-4414.1.patch, YARN-4414.2.patch
>
>
> This is related to YARN-3238.  Ran into more scenarios where connection 
> errors are being retried at multiple levels, like NoRouteToHostException.  
> The fix for YARN-3238 was too specific, and I think we need a more general 
> solution to catch a wider array of connection errors that can occur to avoid 
> retrying them both at the RPC layer and at the NM proxy layer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4476) Matcher for complex node label expressions

2016-01-07 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated YARN-4476:

Attachment: YARN-4476-2.patch

Fixed more checkstyle warnings. Diminishing returns on the remainder...

> Matcher for complex node label expressions
> -
>
> Key: YARN-4476
> URL: https://issues.apache.org/jira/browse/YARN-4476
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Chris Douglas
>Assignee: Chris Douglas
> Attachments: YARN-4476-0.patch, YARN-4476-1.patch, YARN-4476-2.patch
>
>
> Implementation of a matcher for complex node label expressions based on a 
> [paper|http://dl.acm.org/citation.cfm?id=1807171] from SIGMOD 2010.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4558) Yarn client retries on some non-retriable exceptions

2016-01-07 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created YARN-4558:
--

 Summary: Yarn client retries on some non-retriable exceptions
 Key: YARN-4558
 URL: https://issues.apache.org/jira/browse/YARN-4558
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.7.1
Reporter: Sergey Shelukhin
Priority: Minor


It seems the problem is in RMProxy, where the retry policy is built.
{noformat}
Thread 23594: (state = BLOCKED)
- java.lang.Thread.sleep(long) @bci=0 (Interpreted frame)
- org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(java.lang.Object, 
java.lang.reflect.Method, java.lang.Object[]) @bci=603, line=155 (Interpreted 
frame)
- 
com.sun.proxy.$Proxy32.getClusterNodes(org.apache.hadoop.yarn.api.protocolrecords.GetClusterNodesRequest)
 @bci=16 (Interpreted frame)
- 
org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getNodeReports(org.apache.hadoop.yarn.api.records.NodeState[])
 @bci=66, line=515 (Interpreted frame)
{noformat}
produces
{noformat}
2016-01-07 02:50:45,111 [main] WARN  ipc.Client - Exception encountered while 
connecting to the server : javax.security.sasl.SaslException: GSS initiate 
failed [Caused by GSSException: No valid credentials provided (Mechanism level: 
Failed to find any Kerberos tgt)]
2016-01-07 02:51:15,126 [main] WARN  ipc.Client - Exception encountered while 
connecting to the server : javax.security.sasl.SaslException: GSS initiate 
failed [Caused by GSSException: No valid credentials provided (Mechanism level: 
Failed to find any Kerberos tgt)]
...
{noformat}
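
For illustration, one way a retry policy could short-circuit such failures is sketched below (this is not the RMProxy code; FailFastOnSaslRetryPolicy is a hypothetical wrapper written against the public org.apache.hadoop.io.retry.RetryPolicy interface):

{code}
import javax.security.sasl.SaslException;
import org.apache.hadoop.io.retry.RetryPolicy;

/** Hypothetical wrapper: fail fast on SASL/Kerberos errors, delegate otherwise. */
class FailFastOnSaslRetryPolicy implements RetryPolicy {
  private final RetryPolicy delegate;

  FailFastOnSaslRetryPolicy(RetryPolicy delegate) {
    this.delegate = delegate;
  }

  @Override
  public RetryAction shouldRetry(Exception e, int retries, int failovers,
      boolean isIdempotentOrAtMostOnce) throws Exception {
    // Missing Kerberos credentials will not heal by retrying the same call.
    if (e instanceof SaslException || e.getCause() instanceof SaslException) {
      return RetryAction.FAIL;
    }
    return delegate.shouldRetry(e, retries, failovers, isIdempotentOrAtMostOnce);
  }
}
{code}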



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (YARN-4032) Corrupted state from a previous version can still cause RM to fail with NPE due to same reasons as YARN-2834

2016-01-07 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla resolved YARN-4032.

Resolution: Duplicate

YARN-4347 should fix this issue. Closing this as a duplicate. 

> Corrupted state from a previous version can still cause RM to fail with NPE 
> due to same reasons as YARN-2834
> 
>
> Key: YARN-4032
> URL: https://issues.apache.org/jira/browse/YARN-4032
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Anubhav Dhoot
>Assignee: Jian He
>Priority: Critical
> Attachments: YARN-4032.prelim.patch
>
>
> YARN-2834 ensures in 2.6.0 there will not be any inconsistent state. But if 
> someone is upgrading from a previous version, the state can still be 
> inconsistent and then RM will still fail with NPE after upgrade to 2.6.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4438) Implement RM leader election with curator

2016-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088213#comment-15088213
 ] 

Hadoop QA commented on YARN-4438:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 42s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 11s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 9s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 31s 
{color} | {color:red} Patch generated 4 new checkstyle issues in 
hadoop-yarn-project/hadoop-yarn (total was 315, now 318). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
48s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 25s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed 
with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 60m 6s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed 
with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 21s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 157m 54s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_91 Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  

[jira] [Commented] (YARN-4438) Implement RM leader election with curator

2016-01-07 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088220#comment-15088220
 ] 

Xuan Gong commented on YARN-4438:
-

+1 lgtm. Checking this in.

> Implement RM leader election with curator
> -
>
> Key: YARN-4438
> URL: https://issues.apache.org/jira/browse/YARN-4438
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-4438.1.patch, YARN-4438.2.patch, YARN-4438.3.patch, 
> YARN-4438.4.patch, YARN-4438.5.patch, YARN-4438.6.patch
>
>
> This is to implement the leader election with curator instead of the 
> ActiveStandbyElector from common package,  this also avoids adding more 
> configs in common to suit RM's own needs. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4335) Allow ResourceRequests to specify ExecutionType of a request ask

2016-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088224#comment-15088224
 ] 

Hadoop QA commented on YARN-4335:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 59s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 50s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 14s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 29s 
{color} | {color:red} Patch generated 1 new checkstyle issues in 
hadoop-yarn-project/hadoop-yarn (total was 41, now 42). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 0s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 36s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 40s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 7s {color} | 
{color:red} hadoop-yarn-client in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 32s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 23s 
{color} | {color:green} hadoop-yarn-common in 

[jira] [Commented] (YARN-4335) Allow ResourceRequests to specify ExecutionType of a request ask

2016-01-07 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088101#comment-15088101
 ] 

Wangda Tan commented on YARN-4335:
--

+1 to latest patch, thanks [~kkaranasos]!

> Allow ResourceRequests to specify ExecutionType of a request ask
> 
>
> Key: YARN-4335
> URL: https://issues.apache.org/jira/browse/YARN-4335
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-4335-yarn-2877.001.patch, YARN-4335.002.patch, 
> YARN-4335.003.patch
>
>
> YARN-2882 introduced container types that are internal (not user-facing) and 
> are used by the ContainerManager during execution at the NM.
> With this JIRA we are introducing (user-facing) resource request types that 
> are used by the AM to specify the type of the ResourceRequest.
> We will initially support two resource request types: CONSERVATIVE and 
> OPTIMISTIC.
> CONSERVATIVE resource requests will be handed internally to containers of 
> GUARANTEED type, whereas OPTIMISTIC resource requests will be handed to 
> QUEUEABLE containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3223) Resource update during NM graceful decommission

2016-01-07 Thread Brook Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088097#comment-15088097
 ] 

Brook Zhou commented on YARN-3223:
--

Thanks [~djp] for the feedback. 

The scenarios you mention are indeed problematic. I think the proposal would 
end up making changes to SchedulerNode and adding more complexity there. It 
could become too much overhead in terms of maintaining more state, and it 
would still not solve the issues entirely, because the system would still be 
only eventually consistent. 

Since CapacityScheduler.nodeUpdate is already synchronized, if we eliminate 
the asynchronous RMNodeResourceUpdateEvent and directly modify the 
decommissioning SchedulerNode using updateNodeAndQueueResource, we guarantee 
the SchedulerNode's consistency. 
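
Roughly, the argument is that the resize happens entirely inside the scheduler lock. A simplified, self-contained sketch with stand-in fields (not the YARN classes):

{code}
// Resizing the node inside the synchronized heartbeat handler means no other
// scheduler operation can observe a half-updated node, unlike the path that
// dispatches an asynchronous resource-update event.
class SchedulerSketch {
  private int nodeTotal;  // stands in for the SchedulerNode's total resource
  private int nodeUsed;   // stands in for its currently used resource

  synchronized void nodeUpdate(boolean decommissioning) {
    if (decommissioning) {
      nodeTotal = nodeUsed;  // nothing new can be scheduled on this node
    }
    // ... rest of the heartbeat processing runs under the same lock ...
  }
}
{code}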

> Resource update during NM graceful decommission
> ---
>
> Key: YARN-3223
> URL: https://issues.apache.org/jira/browse/YARN-3223
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful, nodemanager, resourcemanager
>Affects Versions: 2.7.1
>Reporter: Junping Du
>Assignee: Brook Zhou
> Attachments: YARN-3223-v0.patch, YARN-3223-v1.patch, 
> YARN-3223-v2.patch, YARN-3223-v3.patch
>
>
> During NM graceful decommission, we should handle resource updates properly, 
> including: making RMNode keep track of the old resource for possible rollback, 
> keeping the available resource at 0, and updating the used resource as 
> containers finish.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4335) Allow ResourceRequests to specify ExecutionType of a request ask

2016-01-07 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-4335:
--
Description: 
YARN-2882 introduced container types that are internal (not user-facing) and 
are used by the ContainerManager during execution at the NM.

With this JIRA we are introducing (user-facing) resource request types that are 
used by the AM to specify the type of the ResourceRequest.

  was:
YARN-2882 introduced container types that are internal (not user-facing) and 
are used by the ContainerManager during execution at the NM.

With this JIRA we are introducing (user-facing) resource request types that are 
used by the AM to specify the type of the ResourceRequest.

We will initially support two resource request types: CONSERVATIVE and 
OPPORTUNISTIC.
CONSERVATIVE resource requests will be handed internally to containers of 
GUARANTEED type, whereas OPPORTUNISTIC resource requests will be handed to 
QUEUEABLE containers.


> Allow ResourceRequests to specify ExecutionType of a request ask
> 
>
> Key: YARN-4335
> URL: https://issues.apache.org/jira/browse/YARN-4335
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-4335-yarn-2877.001.patch, YARN-4335.002.patch, 
> YARN-4335.003.patch
>
>
> YARN-2882 introduced container types that are internal (not user-facing) and 
> are used by the ContainerManager during execution at the NM.
> With this JIRA we are introducing (user-facing) resource request types that 
> are used by the AM to specify the type of the ResourceRequest.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4335) Allow ResourceRequests to specify ExecutionType of a request ask

2016-01-07 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088112#comment-15088112
 ] 

Arun Suresh commented on YARN-4335:
---

Thanks [~leftnoteasy] for the review and [~kkaranasos] for the patch. I will 
commit this to the yarn-2877 branch after Jenkins runs.

> Allow ResourceRequests to specify ExecutionType of a request ask
> 
>
> Key: YARN-4335
> URL: https://issues.apache.org/jira/browse/YARN-4335
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-4335-yarn-2877.001.patch, YARN-4335.002.patch, 
> YARN-4335.003.patch
>
>
> YARN-2882 introduced container types that are internal (not user-facing) and 
> are used by the ContainerManager during execution at the NM.
> With this JIRA we are introducing (user-facing) resource request types that 
> are used by the AM to specify the type of the ResourceRequest.
> We will initially support two resource request types: CONSERVATIVE and 
> OPTIMISTIC.
> CONSERVATIVE resource requests will be handed internally to containers of 
> GUARANTEED type, whereas OPTIMISTIC resource requests will be handed to 
> QUEUEABLE containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4335) Allow ResourceRequests to specify ExecutionType of a request ask

2016-01-07 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-4335:
--
Description: 
YARN-2882 introduced container types that are internal (not user-facing) and 
are used by the ContainerManager during execution at the NM.

With this JIRA we are introducing (user-facing) resource request types that are 
used by the AM to specify the type of the ResourceRequest.

We will initially support two resource request types: CONSERVATIVE and 
OPPORTUNISTIC.
CONSERVATIVE resource requests will be handed internally to containers of 
GUARANTEED type, whereas OPPORTUNISTIC resource requests will be handed to 
QUEUEABLE containers.

  was:
YARN-2882 introduced container types that are internal (not user-facing) and 
are used by the ContainerManager during execution at the NM.

With this JIRA we are introducing (user-facing) resource request types that are 
used by the AM to specify the type of the ResourceRequest.

We will initially support two resource request types: CONSERVATIVE and 
OPTIMISTIC.
CONSERVATIVE resource requests will be handed internally to containers of 
GUARANTEED type, whereas OPTIMISTIC resource requests will be handed to 
QUEUEABLE containers.


> Allow ResourceRequests to specify ExecutionType of a request ask
> 
>
> Key: YARN-4335
> URL: https://issues.apache.org/jira/browse/YARN-4335
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-4335-yarn-2877.001.patch, YARN-4335.002.patch, 
> YARN-4335.003.patch
>
>
> YARN-2882 introduced container types that are internal (not user-facing) and 
> are used by the ContainerManager during execution at the NM.
> With this JIRA we are introducing (user-facing) resource request types that 
> are used by the AM to specify the type of the ResourceRequest.
> We will initially support two resource request types: CONSERVATIVE and 
> OPPORTUNISTIC.
> CONSERVATIVE resource requests will be handed internally to containers of 
> GUARANTEED type, whereas OPPORTUNISTIC resource requests will be handed to 
> QUEUEABLE containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4496) Improve HA ResourceManager Failover detection on the client

2016-01-07 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088251#comment-15088251
 ] 

Jian He commented on YARN-4496:
---

+1. [~asuresh], are you working on this?

> Improve HA ResourceManager Failover detection on the client
> ---
>
> Key: YARN-4496
> URL: https://issues.apache.org/jira/browse/YARN-4496
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client, resourcemanager
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> HDFS deployments can currently use the {{RequestHedgingProxyProvider}} to 
> improve Namenode failover detection in the client. It does this by 
> concurrently trying all namenodes and picks the namenode that returns the 
> fastest with a successful response as the active node.
> It would be useful to have a similar ProxyProvider for the Yarn RM (it can 
> possibly be done by converging some the class hierarchies to use the same 
> ProxyProvider)
> This would especially be useful for large YARN deployments with multiple 
> standby RMs where clients will be able to pick the active RM without having 
> to traverse a list of configured RMs. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2962) ZKRMStateStore: Limit the number of znodes under a znode

2016-01-07 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088281#comment-15088281
 ] 

Varun Saxena commented on YARN-2962:


[~kasha], [~jianhe], [~vinodkv], [~asuresh], kindly review.
I will address the checkstyle issues in the next patch.

> ZKRMStateStore: Limit the number of znodes under a znode
> 
>
> Key: YARN-2962
> URL: https://issues.apache.org/jira/browse/YARN-2962
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Varun Saxena
>Priority: Critical
> Attachments: YARN-2962.01.patch, YARN-2962.04.patch, 
> YARN-2962.2.patch, YARN-2962.3.patch
>
>
> We ran into this issue where we were hitting the default ZK server message 
> size configs, primarily because the message had too many znodes even though 
> individually they were all small.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4438) Implement RM leader election with curator

2016-01-07 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088285#comment-15088285
 ] 

Xuan Gong commented on YARN-4438:
-

Committed to trunk/branch-2. Thanks, Jian. And thanks for the review, Karthik.

> Implement RM leader election with curator
> -
>
> Key: YARN-4438
> URL: https://issues.apache.org/jira/browse/YARN-4438
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jian He
>Assignee: Jian He
> Fix For: 2.9.0
>
> Attachments: YARN-4438.1.patch, YARN-4438.2.patch, YARN-4438.3.patch, 
> YARN-4438.4.patch, YARN-4438.5.patch, YARN-4438.6.patch
>
>
> This is to implement the leader election with curator instead of the 
> ActiveStandbyElector from common package,  this also avoids adding more 
> configs in common to suit RM's own needs. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4553) Add cgroups support for docker containers

2016-01-07 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088302#comment-15088302
 ] 

Sidharta Seethana commented on YARN-4553:
-

Thanks for the review, [~vvasudev]. The log lines weren't added in this patch - 
just re-indented. In any case, I'll make the fixes you proposed and upload a 
new patch. 

> Add cgroups support for docker containers
> -
>
> Key: YARN-4553
> URL: https://issues.apache.org/jira/browse/YARN-4553
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Attachments: YARN-4553.001.patch, YARN-4553.002.patch
>
>
> Currently, cgroups-based resource isolation does not work with docker 
> containers under YARN. The processes in these containers are launched by the 
> docker daemon and they are not children of a container-executor process. 
> Docker supports a --cgroup-parent flag which can be used to point to the 
> container-specific cgroups that are created by the nodemanager. This will 
> allow the Nodemanager to manage cgroups (as it does today) while allowing 
> resource isolation to work with docker containers. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

