[jira] [Updated] (YARN-6253) FlowActivityColumnPrefix.store(byte[] rowKey, ...) drops timestamp

2017-08-15 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6253:
-
Labels: atsv2-hbase yarn-5355-merge-blocker  (was: yarn-5355-merge-blocker)

> FlowActivityColumnPrefix.store(byte[] rowKey, ...) drops timestamp
> ---
>
> Key: YARN-6253
> URL: https://issues.apache.org/jira/browse/YARN-6253
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha2
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>  Labels: atsv2-hbase, yarn-5355-merge-blocker
> Fix For: YARN-5355
>
> Attachments: YARN-6253.01.patch
>
>







[jira] [Updated] (YARN-6316) Provide help information and documentation for TimelineSchemaCreator

2017-08-15 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6316:
-
Labels: atsv2-hbase  (was: )

> Provide help information and documentation for TimelineSchemaCreator
> 
>
> Key: YARN-6316
> URL: https://issues.apache.org/jira/browse/YARN-6316
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Haibo Chen
>  Labels: atsv2-hbase
> Fix For: YARN-5355, YARN-5355-branch-2, 3.0.0-alpha4
>
> Attachments: YARN-6316.00.patch, YARN-6316.prelim.patch
>
>
> Right now there is no help information for the timeline schema creator. We 
> probably want to provide an option to print help. Also, ideally, if users 
> pass in no arguments, we may want to print the help instead of directly 
> creating the tables. This will simplify cluster operations and timeline v2 
> deployments. 
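
A minimal sketch of the no-argument behavior being asked for, assuming a hypothetical entry point built on Apache Commons CLI; the option names and class wiring are illustrative, not the actual TimelineSchemaCreator code:
{code:java}
import org.apache.commons.cli.HelpFormatter;
import org.apache.commons.cli.Options;

public class SchemaCreatorCli {
  public static void main(String[] args) {
    Options options = new Options();
    options.addOption("help", false, "print usage information and exit");
    options.addOption("create", false, "create the timeline service tables");

    // Print help instead of silently creating tables when the user passes
    // no arguments or explicitly asks for -help.
    if (args.length == 0 || "-help".equals(args[0])) {
      new HelpFormatter().printHelp("TimelineSchemaCreator", options);
      return;
    }
    // ... otherwise parse the remaining options and proceed with table creation ...
  }
}
{code}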






[jira] [Updated] (YARN-5980) Update documentation for single node hbase deploy

2017-08-15 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5980:
-
Labels: atsv2-hbase yarn-5355-merge-blocker  (was: yarn-5355-merge-blocker)

> Update documentation for single node hbase deploy
> -
>
> Key: YARN-5980
> URL: https://issues.apache.org/jira/browse/YARN-5980
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: atsv2-hbase, yarn-5355-merge-blocker
> Fix For: YARN-5355
>
> Attachments: YARN-5980.001.patch, YARN-5980.002.patch, 
> YARN-5980.003.patch, YARN-5980.004.patch
>
>
> Per HBASE-17272, a single-node HBase deployment (a single JVM running the daemons, 
> with writes going to HDFS) will be added to HBase shortly. 
> We should update the timeline service documentation in the setup/deployment 
> context accordingly; this will help users who are a bit wary of HBase 
> deployments get started with the timeline service more easily.






[jira] [Updated] (YARN-6094) Update the coprocessor to be a dynamically loaded one

2017-08-15 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6094:
-
Labels: atsv2-hbase yarn-5355-merge-blocker  (was: yarn-5355-merge-blocker)

> Update the coprocessor to be a dynamically loaded one
> -
>
> Key: YARN-6094
> URL: https://issues.apache.org/jira/browse/YARN-6094
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: atsv2-hbase, yarn-5355-merge-blocker
> Fix For: YARN-5355
>
> Attachments: YARN-6094.001.patch, YARN-6094-YARN-5355.001.patch, 
> YARN-6094-YARN-5355.002.patch, YARN-6094-YARN-5355.003.patch, 
> YARN-6094-YARN-5355.004.patch
>
>
> The timeline service v2 code base in YARN now uses HBase 1.2.4, after 
> YARN-5976. 
> With this version of HBase, system classes (starting with org.apache.hadoop) 
> can be loaded as table-level coprocessors. Hence we should update the 
> timeline service coprocessor to be dynamically loaded instead of statically 
> loaded. 
> This involves code changes as well as documentation updates for deployment.
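
As a rough illustration of table-level (dynamic) loading with the HBase 1.2 client API; the table name, jar location and priority are placeholders rather than what the schema creator actually uses:
{code:java}
import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.Coprocessor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;

public class DynamicCoprocessorExample {
  public static HTableDescriptor flowRunDescriptor() throws IOException {
    HTableDescriptor flowRun =
        new HTableDescriptor(TableName.valueOf("prod.timelineservice.flowrun"));
    // Attach the coprocessor to this one table from a jar on HDFS, instead of
    // listing the class in hbase-site.xml (static, region-server-wide loading).
    flowRun.addCoprocessor(
        "org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor",
        new Path("hdfs:///hbase/coprocessors/hadoop-yarn-server-timelineservice.jar"),
        Coprocessor.PRIORITY_USER,
        Collections.<String, String>emptyMap());
    return flowRun;
  }
}
{code}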






[jira] [Updated] (YARN-7021) TestResourceUtils to be moved to hadoop-yarn-api package

2017-08-15 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7021:
--
Summary: TestResourceUtils to be moved to hadoop-yarn-api package  (was: 
TestResourceUtils to be moved to yarn-common)

> TestResourceUtils to be moved to hadoop-yarn-api package
> 
>
> Key: YARN-7021
> URL: https://issues.apache.org/jira/browse/YARN-7021
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>
> The ResourceUtils class is now in yarn-api. It would be better for its test class 
> to be moved there as well; however, these tests use a lot of resources and rely on 
> ConfigurationProvider, which is available only in yarn-common. Hence, 
> investigate and improve the tests for the ResourceUtils class.






[jira] [Created] (YARN-7021) TestResourceUtils to be moved to yarn-common

2017-08-15 Thread Sunil G (JIRA)
Sunil G created YARN-7021:
-

 Summary: TestResourceUtils to be moved to yarn-common
 Key: YARN-7021
 URL: https://issues.apache.org/jira/browse/YARN-7021
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Sunil G


The ResourceUtils class is now in yarn-api. It would be better for its test class 
to be moved there as well; however, these tests use a lot of resources and rely on 
ConfigurationProvider, which is available only in yarn-common. Hence, 
investigate and improve the tests for the ResourceUtils class.






[jira] [Commented] (YARN-6781) ResourceUtils.initializeResourcesMap() takes an unnecessary Map parameter

2017-08-15 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128362#comment-16128362
 ] 

Sunil G commented on YARN-6781:
---

Sorry. During commit, I tried to compile the whole of Hadoop and found that the 
compilation is broken. In an earlier jira, we moved ResourceUtils to yarn-api; 
however, its test class was left behind in hadoop-yarn-common itself. I'll raise a 
ticket to track it separately.

For now, please help to share a new patch. The error traces are below; you need to 
remove the extra argument from {{TestResourceUtils}} as well.
{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project hadoop-yarn-common: Compilation failure: 
Compilation failure:
[ERROR] 
/Users/sunilgovindan/Work/hadoop/commit/sb_trunk/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceUtils.java:[187,20]
 method initializeResourcesMap in class 
org.apache.hadoop.yarn.util.resource.ResourceUtils cannot be applied to given 
types;
[ERROR] required: org.apache.hadoop.conf.Configuration
[ERROR] found: 
org.apache.hadoop.conf.Configuration,java.util.Map
[ERROR] reason: actual and formal argument lists differ in length
[ERROR] 
/Users/sunilgovindan/Work/hadoop/commit/sb_trunk/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceUtils.java:[253,22]
 method initializeResourcesMap in class 
org.apache.hadoop.yarn.util.resource.ResourceUtils cannot be applied to given 
types;
[ERROR] required: org.apache.hadoop.conf.Configuration
[ERROR] found: 
org.apache.hadoop.conf.Configuration,java.util.Map
[ERROR] reason: actual and formal argument lists differ in length
{noformat}
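
In other words, the test call sites need to drop the second argument. A hedged sketch of the call-shape change only; it assumes the method stays visible to the test package and that getResourceTypes() is the accessor used to read the result back, and the class below is not the real test code:
{code:java}
package org.apache.hadoop.yarn.util.resource;

import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.ResourceInformation;

class InitializeResourcesMapCallSite {
  static Map<String, ResourceInformation> initialize(Configuration conf) {
    // Old call (now a compile error): the caller created the map and passed it in.
    //   ResourceUtils.initializeResourcesMap(conf, new HashMap<>());
    // New call: the map is created inside initializeResourcesMap() itself.
    ResourceUtils.initializeResourcesMap(conf);
    return ResourceUtils.getResourceTypes();
  }
}
{code}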

> ResourceUtils.initializeResourcesMap() takes an unnecessary Map parameter
> -
>
> Key: YARN-6781
> URL: https://issues.apache.org/jira/browse/YARN-6781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Yu-Tang Lin
>Priority: Minor
>  Labels: newbie
> Fix For: YARN-3926
>
> Attachments: YARN-6781.001.patch, YARN-6781-YARN-3926.002.patch
>
>
> The {{resourceInformationMap}} parameter is always passed in as a new {{Map}} 
> object, and it's never referenced again after the call.  The parameter can be 
> eliminated.  Instead the {{Map}} can be created inside the 
> {{initializeResourcesMap()}} method.







[jira] [Updated] (YARN-3649) Allow configurable prefix for hbase table names (like prod, exp, test etc)

2017-08-15 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-3649:
-
Labels: YARN-5355 atsv2-hbase  (was: YARN-5355)

> Allow configurable prefix for hbase table names (like prod, exp, test etc)
> --
>
> Key: YARN-3649
> URL: https://issues.apache.org/jira/browse/YARN-3649
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: YARN-5355, atsv2-hbase
> Fix For: YARN-5355
>
> Attachments: YARN-3649-YARN-2928.01.patch, 
> YARN-3649-YARN-5355.002.patch, YARN-3649-YARN-5355.003.patch, 
> YARN-3649-YARN-5355.004.patch, YARN-3649-YARN-5355.005.patch, 
> YARN-3649-YARN-5355.01.patch
>
>
> As per [~jrottinghuis]'s suggestion in YARN-3411, it would be a good idea to 
> have a configurable prefix for HBase table names.  
> This way we can easily run a staging, a test, a production or any other setup 
> in the same HBase instance without having to override every single table name in 
> the config.
> One could simply overwrite the default prefix and be off and running.
> For the prefix, potential candidates are "tst", "prod", "exp", etc. One can then 
> still override a single table name if needed, but managing a whole setup will be 
> easier.
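
A minimal sketch of the idea, assuming a hypothetical configuration key; the property name, default prefix and table name below are illustrative, not necessarily what the patch defines:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;

public class PrefixedTableNames {
  // Hypothetical property name and default.
  static final String PREFIX_KEY = "yarn.timeline-service.hbase-schema.prefix";
  static final String DEFAULT_PREFIX = "prod.";

  // Every table name is derived from one cluster-wide prefix, so a test or
  // staging setup only overrides the prefix instead of every single table name.
  static TableName entityTable(Configuration conf) {
    String prefix = conf.get(PREFIX_KEY, DEFAULT_PREFIX);
    return TableName.valueOf(prefix + "timelineservice.entity");
  }
}
{code}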






[jira] [Updated] (YARN-5229) Refactor #isApplicationEntity and #getApplicationEvent from HBaseTimelineWriterImpl

2017-08-15 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5229:
-
Labels: YARN-5355 atsv2-hbase  (was: YARN-5355)

> Refactor #isApplicationEntity and #getApplicationEvent from 
> HBaseTimelineWriterImpl
> ---
>
> Key: YARN-5229
> URL: https://issues.apache.org/jira/browse/YARN-5229
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928, YARN-5355
>Reporter: Joep Rottinghuis
>Assignee: Vrushali C
>Priority: Minor
>  Labels: YARN-5355, atsv2-hbase
> Fix For: YARN-5355
>
> Attachments: YARN-229-YARN-5355.01.patch, 
> YARN-5229-YARN-2928.01.patch, YARN-5229-YARN-2928.02.patch, 
> YARN-5229-YARN-2928.03.patch, YARN-5229-YARN-2928.04.patch
>
>
> As per [~gtCarrera9] commented in YARN-5170
> bq. In HBaseTimelineWriterImpl isApplicationEntity and getApplicationEvent 
> seem to be awkward. Looks more like something related to TimelineEntity or 
> ApplicationEntity
> In YARN-5170 we just made the method private, and in this separate jira we 
> can refactor these methods to TimelineEntity or ApplicationEntity.
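
A rough sketch of the direction, assuming a static helper that lives with the entity model; the class name, placement and exact signature are up to the patch:
{code:java}
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntityType;

public final class ApplicationEntityHelper {
  private ApplicationEntityHelper() {
  }

  // Lives next to the entity types rather than inside HBaseTimelineWriterImpl,
  // so any writer or reader can ask the same question.
  public static boolean isApplicationEntity(TimelineEntity entity) {
    return TimelineEntityType.YARN_APPLICATION.toString().equals(entity.getType());
  }
}
{code}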






[jira] [Updated] (YARN-6861) Reader API for sub application entities

2017-08-15 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6861:
-
Labels: atsv2-subapp  (was: )

> Reader API for sub application entities
> ---
>
> Key: YARN-6861
> URL: https://issues.apache.org/jira/browse/YARN-6861
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: atsv2-subapp
> Attachments: YARN-6861-YARN-5355.001.patch, 
> YARN-6861-YARN-5355.002.patch
>
>
> YARN-6733 and YARN-6734 write data into the sub-application table. There should 
> be a way to read those entities.






[jira] [Updated] (YARN-6734) Ensure sub-application user is extracted & sent to timeline service

2017-08-15 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6734:
-
Labels: atsv2-subapp  (was: )

> Ensure sub-application user is extracted & sent to timeline service
> ---
>
> Key: YARN-6734
> URL: https://issues.apache.org/jira/browse/YARN-6734
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Rohith Sharma K S
>  Labels: atsv2-subapp
> Fix For: YARN-5355, YARN-5355-branch-2
>
> Attachments: YARN-6734-YARN-5355.001.patch, 
> YARN-6734-YARN-5355.002.patch, YARN-6734-YARN-5355.003.patch, 
> YARN-6734-YARN-5355.004.patch
>
>
> After a discussion with Tez folks, we have been thinking over introducing a 
> table to store sub-application information (YARN-6733).
> For example, a Tez session may run for a certain period as user X and run a 
> few AMs. These AMs accept DAGs from other users, and Tez will execute those DAGs 
> with a doAs user. ATSv2 should store this information in a new table, perhaps 
> called the "sub_application" table. 
> YARN-6733 tracks the code changes needed for table schema creation.
> This jira tracks writing to that table, updating the user name fields to 
> include the sub-application user, etc. This would mean adding a field to Flow 
> Context which can store an additional user. 
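
For illustration only (not the committed change), the doAs pattern that produces the two users in question, i.e. the user the AM runs as and the sub-application user a DAG runs as:
{code:java}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

public class SubApplicationUserExample {
  public static void main(String[] args) throws Exception {
    UserGroupInformation amUser = UserGroupInformation.getCurrentUser();
    // Tez-style doAs: the DAG executes as the submitting user, while the AM
    // (and its timeline collector) still runs as amUser.
    UserGroupInformation dagUser =
        UserGroupInformation.createProxyUser("dag-submitter", amUser);
    dagUser.doAs((PrivilegedExceptionAction<Void>) () -> {
      String subAppUser = UserGroupInformation.getCurrentUser().getShortUserName();
      System.out.println("sub-application user to record: " + subAppUser);
      return null;
    });
  }
}
{code}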






[jira] [Updated] (YARN-6318) timeline service schema creator fails if executed from a remote machine

2017-08-15 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6318:
-
Labels: atsv2-hbase yarn-5355-merge-blocker  (was: yarn-5355-merge-blocker)

> timeline service schema creator fails if executed from a remote machine
> ---
>
> Key: YARN-6318
> URL: https://issues.apache.org/jira/browse/YARN-6318
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
>  Labels: atsv2-hbase, yarn-5355-merge-blocker
> Fix For: YARN-5355, YARN-5355-branch-2
>
> Attachments: YARN-6318-YARN-5355.01.patch, 
> YARN-6318-YARN-5355.02.patch
>
>
> The timeline service schema creator fails if executed from a remote machine 
> and the remote machine does not have the right {{hbase-site.xml}} file to 
> talk to that remote HBase cluster.
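
A hedged sketch of the general pattern, i.e. pointing the client at an explicit hbase-site.xml instead of relying on whatever happens to be on the local classpath; the file location (and any YARN property used to carry it) is a deployment-specific assumption:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RemoteHBaseConf {
  static Configuration forCluster(String hbaseSiteXml) {
    Configuration conf = HBaseConfiguration.create();
    // Load the remote cluster's settings (zookeeper quorum, znode parent, ...)
    // explicitly, so the schema creator can run from any machine.
    conf.addResource(new Path(hbaseSiteXml));
    return conf;
  }
}
{code}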






[jira] [Updated] (YARN-6850) Ensure that supplemented timestamp is stored only for flow run metrics

2017-08-15 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6850:
-
Labels: atsv2-hbase yarn-5355-merge-blocker  (was: yarn-5355-merge-blocker)

> Ensure that supplemented timestamp is stored only for flow run metrics
> --
>
> Key: YARN-6850
> URL: https://issues.apache.org/jira/browse/YARN-6850
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Varun Saxena
>  Labels: atsv2-hbase, yarn-5355-merge-blocker
> Fix For: YARN-5355, YARN-5355-branch-2
>
> Attachments: YARN-6850-YARN-5355.01.patch
>
>
> In timeline service v2, ColumnHelper#getPutTimestamp supplements the 
> timestamp and is called by ColumnHelper#store. This is not conditional and is 
> called for every put.
> We need to ensure that the cell timestamps for metrics in entity and 
> application (and sub application) tables are "correct" timestamps since we 
> will be enabling TTLs for these cells. 
> The supplemented timestamp is to be used only in the flow run table by the 
> coprocessor which intercepts all reads & writes to cells in this table. It 
> looks at the supplemented timestamp to figure out which app id this 
> particular cell belongs to. This is done in order to ensure no collision 
> occurs when two apps belonging to same flow run write the same metric at the 
> same timestamp. 
> Discovered in the discussion in YARN-4455 
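
A minimal sketch of the intended conditional behavior; the constant and method names are illustrative, not the actual ColumnHelper/TimestampGenerator code:
{code:java}
public final class PutTimestampSketch {
  // Illustrative multiplier: leaves room for an app-id-derived suffix.
  private static final long TS_MULTIPLIER = 1_000_000L;

  // Only the flow run table gets the supplemented timestamp; the entity,
  // application and sub-application tables keep the real cell timestamp so
  // that HBase TTLs expire their metric cells correctly.
  static long putTimestamp(long ts, String appId, boolean supplement) {
    if (!supplement) {
      return ts;
    }
    long suffix = Math.abs(appId.hashCode()) % TS_MULTIPLIER;
    return ts * TS_MULTIPLIER + suffix;
  }
}
{code}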






[jira] [Updated] (YARN-6874) Supplement timestamp for min start/max end time columns in flow run table to avoid overwrite

2017-08-15 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6874:
-
Labels: atsv2-hbase  (was: )

> Supplement timestamp for min start/max end time columns in flow run table to 
> avoid overwrite
> 
>
> Key: YARN-6874
> URL: https://issues.apache.org/jira/browse/YARN-6874
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Vrushali C
>  Labels: atsv2-hbase
> Fix For: YARN-5355, YARN-5355-branch-2
>
> Attachments: YARN-6874-YARN-5355.0001.patch
>
>
> The following test case is failing in the YARN-5355 branch.
> This happens because, post YARN-6850, we are not supplementing the timestamp for 
> FlowRunColumn (i.e. the min_start_time and max_end_time columns), 
> which can lead to a clash if two writes for app-created events happen at the 
> same time, as is the case in this test.
> To fix this, we need to pass a true flag into the ColumnHelper constructor. I did 
> encounter this failure once earlier too.
> {noformat}
> testWriteFlowRunMinMax(org.apache.hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun)
>   Time elapsed: 0.088 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<142502690> but was:<1425026901000>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun.testWriteFlowRunMinMax(TestHBaseStorageFlowRun.java:237)
> {noformat}






[jira] [Updated] (YARN-6707) [ATSv2] Update HBase version to 1.2.6

2017-08-15 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6707:
-
Labels: atsv2-hbase  (was: )

> [ATSv2] Update HBase version to 1.2.6
> -
>
> Key: YARN-6707
> URL: https://issues.apache.org/jira/browse/YARN-6707
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-5355
>Reporter: Varun Saxena
>Assignee: Vrushali C
>  Labels: atsv2-hbase
> Fix For: YARN-5355, YARN-5355-branch-2, 3.0.0-alpha4
>
> Attachments: YARN-6707-YARN-5355.001.patch
>
>







[jira] [Updated] (YARN-6733) Add table for storing sub-application entities

2017-08-15 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6733:
-
Labels: atsv2-subapp  (was: )

> Add table for storing sub-application entities
> --
>
> Key: YARN-6733
> URL: https://issues.apache.org/jira/browse/YARN-6733
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: atsv2-subapp
> Fix For: YARN-5355, YARN-5355-branch-2
>
> Attachments: IMG_7040.JPG, YARN-6733-YARN-5355.001.patch, 
> YARN-6733-YARN-5355.002.patch, YARN-6733-YARN-5355.003.patch, 
> YARN-6733-YARN-5355.004.patch, YARN-6733-YARN-5355.005.patch, 
> YARN-6733-YARN-5355.006.patch, YARN-6733-YARN-5355.007.patch, 
> YARN-6733-YARN-5355.008.patch
>
>
> After a discussion with Tez folks, we have been thinking over introducing a 
> table to store sub-application information.
> For example, a Tez session may run for a certain period as user X and run a 
> few AMs. These AMs accept DAGs from other users, and Tez will execute those DAGs 
> with a doAs user. ATSv2 should store this information in a new table, perhaps 
> called the "sub_application" table. 
> This jira tracks the code changes needed for table schema creation.
> I will file other jiras for writing to that table, updating the user name 
> fields to include the sub-application user, etc.
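
A rough sketch of what creating such a table looks like with the HBase 1.2 admin API; the table and column-family names here are placeholders, not necessarily the schema the patch defines:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class SubApplicationTableExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      HTableDescriptor subApp =
          new HTableDescriptor(TableName.valueOf("prod.timelineservice.subapplication"));
      subApp.addFamily(new HColumnDescriptor("i"));  // info family (placeholder)
      subApp.addFamily(new HColumnDescriptor("c"));  // config family (placeholder)
      subApp.addFamily(new HColumnDescriptor("m"));  // metrics family (placeholder)
      admin.createTable(subApp);
    }
  }
}
{code}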






[jira] [Updated] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-15 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6820:
-
Component/s: (was: timelineserver)
 timelinereader

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Fix For: YARN-5355, YARN-5355-branch-2
>
> Attachments: YARN-6820-YARN-5355.0001.patch, 
> YARN-6820-YARN-5355.002.patch, YARN-6820-YARN-5355.003.patch, 
> YARN-6820-YARN-5355.004.patch, YARN-6820-YARN-5355.005.patch, 
> YARN-6820-YARN-5355_branch_2.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data: that set of users can read all data, 
> and no other user can read any data. The restriction can also be turned off so that 
> all users can read all data.
> The whitelist could be stored in a "domain" table in HBase perhaps, or in a configuration 
> setting for the cluster, or something else that's simple enough. ATSv1 has a 
> concept of a domain for isolating users for reading; it would be good to keep that 
> in consideration. 
> In ATSv1, a domain offers a namespace for the Timeline Server, allowing users to 
> host multiple entities while isolating them from other users and applications. A 
> "Domain" in ATSv1 primarily stores owner info, read and write ACL 
> information, and created and modified timestamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.
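
A minimal sketch of the whitelist idea, assuming hypothetical property names (not necessarily the ones the patch introduces):
{code:java}
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class SimpleReaderWhitelist {
  private final boolean enabled;
  private final Set<String> allowedReaders;

  public SimpleReaderWhitelist(Configuration conf) {
    // Property names are illustrative placeholders.
    this.enabled =
        conf.getBoolean("yarn.timeline-service.read.authentication.enabled", false);
    this.allowedReaders =
        new HashSet<>(conf.getStringCollection("yarn.timeline-service.read.allowed-users"));
  }

  // When the check is disabled everyone can read; when enabled, only
  // whitelisted users can read (and they can read everything).
  public boolean canRead(UserGroupInformation caller) {
    return !enabled || allowedReaders.contains(caller.getShortUserName());
  }
}
{code}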






[jira] [Commented] (YARN-6781) ResourceUtils.initializeResourcesMap() takes an unnecessary Map parameter

2017-08-15 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128350#comment-16128350
 ] 

Sunil G commented on YARN-6781:
---

Jenkins came back clean. Looks fine to me as well. 
Committing this now, since some other jiras need a rebase after this ticket.

Thanks [~Yu-Tang Lin] and [~templedf]

> ResourceUtils.initializeResourcesMap() takes an unnecessary Map parameter
> -
>
> Key: YARN-6781
> URL: https://issues.apache.org/jira/browse/YARN-6781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Yu-Tang Lin
>Priority: Minor
>  Labels: newbie
> Fix For: YARN-3926
>
> Attachments: YARN-6781.001.patch, YARN-6781-YARN-3926.002.patch
>
>
> The {{resourceInformationMap}} parameter is always passed in as a new {{Map}} 
> object, and it's never referenced again after the call.  The parameter can be 
> eliminated.  Instead the {{Map}} can be created inside the 
> {{initializeResourcesMap()}} method.






[jira] [Updated] (YARN-6376) Exceptions caused by synchronous putEntities requests can be swallowed

2017-08-15 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6376:
-
Labels: atsv2-hbase yarn-5355-merge-blocker  (was: yarn-5355-merge-blocker)

> Exceptions caused by synchronous putEntities requests can be swallowed
> --
>
> Key: YARN-6376
> URL: https://issues.apache.org/jira/browse/YARN-6376
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2
>Affects Versions: 3.0.0-alpha2
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Critical
>  Labels: atsv2-hbase, yarn-5355-merge-blocker
> Fix For: YARN-5355, YARN-5355-branch-2, 3.0.0-alpha4
>
> Attachments: YARN-6376.00.patch
>
>
> TimelineCollector.putEntitities() is currently implemented by calling 
> TimelineWriter.write() followed by TimelineWriter.flush(). Given 
> HBaseTimelineWriter.write() is an asynchronous operation, it is possible that 
> TimelineClient sends a synchronous putEntities() request for critical data, 
> but never gets back an exception even though the HBase write request to store 
> the entities may have failed. 
> This is due to a race condition between the WriterFlushThread in 
> TimelineCollectorManager and web threads handling synchronous putEntities() 
> requests. Entities are first put into the buffer by the web thread; it is 
> possible that before the web thread invokes writer.flush(), the WriterFlushThread 
> is fired up to flush the writer. If the entities were not successfully 
> written to the backend during that flush, the WriterFlushThread would simply 
> log an error, whereas the web thread would never get an exception out of 
> its own writer.flush() invocation. This is bad because the reason 
> TimelineClient sends putEntities() synchronously is to be able to retry upon any 
> exception.
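
A minimal sketch of the contract a synchronous put needs; the writer interface below is a stand-in, not the actual TimelineWriter API:
{code:java}
import java.io.IOException;

interface BufferedTimelineWriter {
  void write(Object entities) throws IOException;
  void flush() throws IOException;  // must rethrow backend failures, not just log them
}

final class SyncPutSketch {
  private final BufferedTimelineWriter writer;

  SyncPutSketch(BufferedTimelineWriter writer) {
    this.writer = writer;
  }

  // Hold the writer lock across write + flush so the background flusher
  // cannot slip in between and swallow this request's exception.
  void putEntitiesSync(Object entities) throws IOException {
    synchronized (writer) {
      writer.write(entities);
      writer.flush();
    }
  }
}
{code}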






[jira] [Commented] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-15 Thread Yuqi Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128348#comment-16128348
 ] 

Yuqi Wang commented on YARN-6959:
-

[~jianhe]

I uploaded a new patch for 2.7. Do you know how to trigger Jenkins against 
branch-2.7?

> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959-branch-2.7.002.patch, 
> YARN-6959-branch-2.7.003.patch, YARN-6959-branch-2.8.001.patch, 
> YARN-6959.yarn_nm.log.zip, YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> Previous attempt ResourceRequest may be recorded into current attempt 
> ResourceRequests. These mis-recorded ResourceRequests may confuse AM 
> Container Request and Allocation for current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // Previous precondition check for the attempt id may be outdated here, 
> // i.e. the currentAttempt may not be the corresponding attempt of the 
> attemptId.
> // Such as the attempt id is corresponding to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // Previous attempt ResourceRequest may be recorded into current attempt 
> ResourceRequests
> currentAttempt.updateResourceRequests(ask) ->
> // RM may allocate wrong AM Container for the current attempt, because its 
> ResourceRequests
> // may come from previous attempt which can be any ResourceRequests previous 
> AM asked
> // and there is not matching logic for the original AM Container 
> ResourceRequest and 
> // the returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> The patch is correct because, after it, the RM will definitely record ResourceRequests from 
> different attempts into different objects of 
> SchedulerApplicationAttempt.AppSchedulingInfo.
> So, even if the RM still records ResourceRequests from an old attempt at any time, 
> those ResourceRequests will be recorded in the old AppSchedulingInfo object, which 
> will not impact the current attempt's resource requests and allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; 
> we had better rename it to getCurrentApplicationAttempt, and reconsider 
> whether there are any other bugs related to getApplicationAttempt.






[jira] [Commented] (YARN-6965) Duplicate instantiation in FairSchedulerQueueInfo

2017-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128343#comment-16128343
 ] 

Hudson commented on YARN-6965:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12195 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12195/])
YARN-6965. Duplicate instantiation in FairSchedulerQueueInfo. (aajisaka: rev 
588c190afd49bdbd5708f7805bf6c68f09fee142)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/FairSchedulerQueueInfo.java


> Duplicate instantiation in FairSchedulerQueueInfo
> -
>
> Key: YARN-6965
> URL: https://issues.apache.org/jira/browse/YARN-6965
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha3
>Reporter: Masahiro Tanaka
>Assignee: Masahiro Tanaka
>Priority: Minor
>  Labels: newbie
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.3
>
> Attachments: YARN-6965.0.patch
>
>
> There is a duplicate instantiation in FairSchedulerQueueInfo.java
> https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/FairSchedulerQueueInfo.java#L102-L105
> I think this is not a big issue, but we should fix this in order to avoid 
> confusion.






[jira] [Updated] (YARN-5928) Move ATSv2 HBase backend code into a new module that is only dependent at runtime by yarn servers

2017-08-15 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5928:
-
Labels: atsv2-hbase  (was: )

> Move ATSv2 HBase backend code into a new module that is only dependent at 
> runtime by yarn servers
> -
>
> Key: YARN-5928
> URL: https://issues.apache.org/jira/browse/YARN-5928
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>  Labels: atsv2-hbase
> Fix For: 3.0.0-alpha2, YARN-5355
>
> Attachments: YARN-5928.01.patch, YARN-5928.02.patch, 
> YARN-5928.06.patch, YARN-5928-YARN-5355.02.patch, 
> YARN-5928-YARN-5355.03.patch, YARN-5928-YARN-5355.04.patch, 
> YARN-5928-YARN-5355.04.patch, YARN-5928-YARN-5355.05.patch, 
> YARN-5928-YARN-5355.06.patch, YARN-5928-YARN-5355.07.patch
>
>







[jira] [Updated] (YARN-6979) Add flag to allow all container updates to be initiated via NodeHeartbeatResponse

2017-08-15 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6979:
--
Description: 
Currently, only the Container Resource increase command is sent to the NM via 
NodeHeartbeat response. This JIRA proposes to add a flag in the RM to allow ALL 
container updates (increase, decrease, promote and demote) to be initiated via 
node HB.

The AM is still free to use the ContainerManagementProtocol's 
{{updateContainer}} API in cases where for instance, the Node HB frequency is 
very low and the AM needs to update the container as soon as possible. In these 
situations, if the Node HB arrives before the updateContainer API call, the 
call would error out, due to a version mismatch and the AM is required to 
handle it.

  was:
Currently, only the Container Resource increase command is sent to the NM via 
NodeHeartbeat response. This JIRA proposes to add a flag in the RM to allow ALL 
container updates (increase, decrease, promote and demote) to initiated via 
node HB.

The AM is still free to use the ContainerManagementPrototol's 
{{updateContainer}} API in cases where for instance, the Node HB is frequency 
is very low and the AM needs to update the container as soon as possible. In 
these situations, if the Node HB arrives before the updateContainer API call, 
the call would error out, due to a version mismatch and the AM is required to 
handle it.


> Add flag to allow all container updates to be initiated via 
> NodeHeartbeatResponse
> -
>
> Key: YARN-6979
> URL: https://issues.apache.org/jira/browse/YARN-6979
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: kartheek muthyala
>
> Currently, only the Container Resource increase command is sent to the NM via 
> NodeHeartbeat response. This JIRA proposes to add a flag in the RM to allow 
> ALL container updates (increase, decrease, promote and demote) to be initiated 
> via node HB.
> The AM is still free to use the ContainerManagementProtocol's 
> {{updateContainer}} API in cases where for instance, the Node HB frequency is 
> very low and the AM needs to update the container as soon as possible. In 
> these situations, if the Node HB arrives before the updateContainer API call, 
> the call would error out, due to a version mismatch and the AM is required to 
> handle it.






[jira] [Commented] (YARN-6804) Allow custom hostname for docker containers in native services

2017-08-15 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128329#comment-16128329
 ] 

Varun Vasudev commented on YARN-6804:
-

[~jianhe] - can you backport the NM pieces to branch-2 and branch-2.8? It's 
blocking a bunch of other backports. Thanks!

cc [~shaneku...@gmail.com], [~leftnoteasy]

> Allow custom hostname for docker containers in native services
> --
>
> Key: YARN-6804
> URL: https://issues.apache.org/jira/browse/YARN-6804
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services, 3.0.0-beta1
>
> Attachments: YARN-6804-trunk.004.patch, YARN-6804-trunk.005.patch, 
> YARN-6804-yarn-native-services.001.patch, 
> YARN-6804-yarn-native-services.002.patch, 
> YARN-6804-yarn-native-services.003.patch, 
> YARN-6804-yarn-native-services.004.patch, 
> YARN-6804-yarn-native-services.005.patch
>
>
> Instead of the default random docker container hostname, we could set a more 
> user-friendly hostname for the container. The default could be a hostname 
> based on the container ID, with an option for the AM to provide a different 
> hostname. In the case of the native services AM, we could provide the 
> hostname that would be created by the registry DNS server. Regardless of 
> whether or not registry DNS is enabled, this would be a more useful hostname 
> for the docker container.






[jira] [Commented] (YARN-6988) container-executor fails for docker when command length > 4096 B

2017-08-15 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128325#comment-16128325
 ] 

Varun Vasudev commented on YARN-6988:
-

Sounds good [~ebadger].

> container-executor fails for docker when command length > 4096 B
> 
>
> Key: YARN-6988
> URL: https://issues.apache.org/jira/browse/YARN-6988
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: YARN-6988.001.patch
>
>
> {{run_docker}} and {{launch_docker_container_as_user}} allocate their command 
> arrays using EXECUTOR_PATH_MAX, which is hardcoded to 4096 in 
> configuration.h. Because of this, the full docker command can only be 4096 
> characters. If it is longer, it will be truncated and the command will fail 
> with a parsing error. Because of the bind-mounting of volumes, the arguments 
> to the docker command can quickly get large. For example, I passed the 4096 
> limit with an 11 disk node. 






[jira] [Commented] (YARN-6965) Duplicate instantiation in FairSchedulerQueueInfo

2017-08-15 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128323#comment-16128323
 ] 

Akira Ajisaka commented on YARN-6965:
-

+1, checking this in.

> Duplicate instantiation in FairSchedulerQueueInfo
> -
>
> Key: YARN-6965
> URL: https://issues.apache.org/jira/browse/YARN-6965
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha3
>Reporter: Masahiro Tanaka
>Assignee: Masahiro Tanaka
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6965.0.patch
>
>
> There is a duplicate instantiation in FairSchedulerQueueInfo.java
> https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/FairSchedulerQueueInfo.java#L102-L105
> I think this is not a big issue, but we should fix this in order to avoid 
> confusion.






[jira] [Updated] (YARN-6047) Documentation updates for TimelineService v2

2017-08-15 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6047:
-
Fix Version/s: YARN-5355

> Documentation updates for TimelineService v2
> 
>
> Key: YARN-6047
> URL: https://issues.apache.org/jira/browse/YARN-6047
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation, timelineserver
>Reporter: Varun Saxena
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Fix For: YARN-5355
>
> Attachments: YARN-6047-YARN-5355.001.patch, 
> YARN-6047-YARN-5355.002.patch
>
>







[jira] [Updated] (YARN-3053) [Security] Review and implement authentication in ATS v.2

2017-08-15 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-3053:
-
Fix Version/s: YARN-5355

> [Security] Review and implement authentication in ATS v.2
> -
>
> Key: YARN-3053
> URL: https://issues.apache.org/jira/browse/YARN-3053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Fix For: YARN-5355
>
> Attachments: ATSv2Authentication(draft).pdf, 
> ATSv2Authentication.v01.pdf, ATSv2Authentication.v02.pdf
>
>
> Per design in YARN-2928, we want to evaluate and review the system for 
> security, and ensure proper security in the system.
> This includes proper authentication, token management, access control, and 
> any other relevant security aspects.






[jira] [Updated] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-15 Thread Yuqi Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Wang updated YARN-6959:

Attachment: YARN-6959-branch-2.7.003.patch

> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959-branch-2.7.002.patch, 
> YARN-6959-branch-2.7.003.patch, YARN-6959-branch-2.8.001.patch, 
> YARN-6959.yarn_nm.log.zip, YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> Previous attempt ResourceRequest may be recorded into current attempt 
> ResourceRequests. These mis-recorded ResourceRequests may confuse AM 
> Container Request and Allocation for current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // Previous precondition check for the attempt id may be outdated here, 
> // i.e. the currentAttempt may not be the corresponding attempt of the 
> attemptId.
> // Such as the attempt id is corresponding to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // Previous attempt ResourceRequest may be recorded into current attempt 
> ResourceRequests
> currentAttempt.updateResourceRequests(ask) ->
> // RM may allocate wrong AM Container for the current attempt, because its 
> ResourceRequests
> // may come from previous attempt which can be any ResourceRequests previous 
> AM asked
> // and there is not matching logic for the original AM Container 
> ResourceRequest and 
> // the returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> The patch is correct because, after it, the RM will definitely record ResourceRequests from 
> different attempts into different objects of 
> SchedulerApplicationAttempt.AppSchedulingInfo.
> So, even if the RM still records ResourceRequests from an old attempt at any time, 
> those ResourceRequests will be recorded in the old AppSchedulingInfo object, which 
> will not impact the current attempt's resource requests and allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; 
> we had better rename it to getCurrentApplicationAttempt, and reconsider 
> whether there are any other bugs related to getApplicationAttempt.






[jira] [Commented] (YARN-6741) Deleting all children of a Parent Queue on refresh throws exception

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128244#comment-16128244
 ] 

Hadoop QA commented on YARN-6741:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
32s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 224 unchanged - 2 fixed = 226 total (was 226) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 26s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}128m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_144 Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
| JDK v1.7.0_131 Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5e40efe |
| JIRA Issue | YARN-6741 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882060/YARN-6741-branch-2.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 

[jira] [Commented] (YARN-6992) "Kill application" button is present even if the application is FINISHED in RM UI

2017-08-15 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128235#comment-16128235
 ] 

Sunil G commented on YARN-6992:
---

I think that, in line with RMAppImpl#isAppInCompletedStates, we could add the states below 
as well. Thoughts?
{code}
  public boolean isAppInCompletedStates() {
RMAppState appState = getState();
return appState == RMAppState.FINISHED || appState == RMAppState.FINISHING
|| appState == RMAppState.FAILED || appState == RMAppState.KILLED
|| appState == RMAppState.FINAL_SAVING
|| appState == RMAppState.KILLING;
  }
{code}


> "Kill application" button is present even if the application is FINISHED in 
> RM UI
> -
>
> Key: YARN-6992
> URL: https://issues.apache.org/jira/browse/YARN-6992
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Suma Shivaprasad
> Attachments: YARN-6992.001.patch
>
>
> Kill button should not be displayed for FAILED, KILLED and FINISHED apps in 
> Application specific landing page






[jira] [Commented] (YARN-6781) ResourceUtils.initializeResourcesMap() takes an unnecessary Map parameter

2017-08-15 Thread Yu-Tang Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128200#comment-16128200
 ] 

Yu-Tang Lin commented on YARN-6781:
---

Hi [#Daniel Templeton], looks like Jenkins is OK now. Please check!

> ResourceUtils.initializeResourcesMap() takes an unnecessary Map parameter
> -
>
> Key: YARN-6781
> URL: https://issues.apache.org/jira/browse/YARN-6781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Yu-Tang Lin
>Priority: Minor
>  Labels: newbie
> Fix For: YARN-3926
>
> Attachments: YARN-6781.001.patch, YARN-6781-YARN-3926.002.patch
>
>
> The {{resourceInformationMap}} parameter is always passed in as a new {{Map}} 
> object, and it's never referenced again after the call.  The parameter can be 
> eliminated.  Instead the {{Map}} can be created inside the 
> {{initializeResourcesMap()}} method.






[jira] [Commented] (YARN-6900) ZooKeeper based implementation of the FederationStateStore

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128199#comment-16128199
 ] 

Hadoop QA commented on YARN-6900:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 58s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 214 unchanged - 0 fixed = 215 total (was 214) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
44s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
15s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6900 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882057/YARN-6900-011.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 2378d156def2 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | 

[jira] [Commented] (YARN-6595) [API] Add Placement Constraints at the application level

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128195#comment-16128195
 ] 

Hadoop QA commented on YARN-6595:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
37s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
48s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
37s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 12 new + 41 unchanged - 0 fixed = 53 total (was 41) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
51s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6595 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882053/YARN-6595-YARN-6592.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux c41aa91dce0c 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-6592 / 7d5bd3e |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16922/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16922/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 

[jira] [Updated] (YARN-6741) Deleting all children of a Parent Queue on refresh throws exception

2017-08-15 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-6741:

Attachment: YARN-6741-branch-2.005.patch

> Deleting all children of a Parent Queue on refresh throws exception
> ---
>
> Key: YARN-6741
> URL: https://issues.apache.org/jira/browse/YARN-6741
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Affects Versions: 3.0.0-alpha3
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: YARN-6741.001.patch, YARN-6741.002.patch, 
> YARN-6741.003.patch, YARN-6741.004.patch, YARN-6741.005.patch, 
> YARN-6741-branch-2.005.patch
>
>
> If we configure CS such that all children of a parent queue are deleted and 
> the parent is converted to a leaf queue, then the {{refreshQueue}} operation 
> fails when re-initializing the parent queue
> {code}
>// Sanity check
>   if (!(newlyParsedQueue instanceof ParentQueue) || !newlyParsedQueue
>   .getQueuePath().equals(getQueuePath())) {
> throw new IOException(
> "Trying to reinitialize " + getQueuePath() + " from "
> + newlyParsedQueue.getQueuePath());
>   }
> {code}
> *Expected Behavior:*
> Converting a Parent Queue to leafQueue on refreshQueue needs to be supported.
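One possible shape of the fix, sketched here purely for illustration (the actual patch may take a different approach), is to relax the type check so that only a queue-path mismatch is rejected, leaving the parent-to-leaf conversion to the reinitialization logic:

{code}
// Illustrative sketch only -- not the attached patch. Reject reinitialization
// only when the paths differ; a ParentQueue whose children were all removed
// may legitimately be re-parsed as a LeafQueue with the same path.
if (!newlyParsedQueue.getQueuePath().equals(getQueuePath())) {
  throw new IOException(
      "Trying to reinitialize " + getQueuePath() + " from "
          + newlyParsedQueue.getQueuePath());
}
{code}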



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6741) Deleting all children of a Parent Queue on refresh throws exception

2017-08-15 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-6741:

Attachment: (was: YARN-6745-branch-2.005.patch)

> Deleting all children of a Parent Queue on refresh throws exception
> ---
>
> Key: YARN-6741
> URL: https://issues.apache.org/jira/browse/YARN-6741
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Affects Versions: 3.0.0-alpha3
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: YARN-6741.001.patch, YARN-6741.002.patch, 
> YARN-6741.003.patch, YARN-6741.004.patch, YARN-6741.005.patch, 
> YARN-6741-branch-2.005.patch
>
>
> If we configure CS such that all children of a parent queue are deleted and 
> the parent is converted to a leaf queue, then the {{refreshQueue}} operation 
> fails when re-initializing the parent queue
> {code}
>// Sanity check
>   if (!(newlyParsedQueue instanceof ParentQueue) || !newlyParsedQueue
>   .getQueuePath().equals(getQueuePath())) {
> throw new IOException(
> "Trying to reinitialize " + getQueuePath() + " from "
> + newlyParsedQueue.getQueuePath());
>   }
> {code}
> *Expected Behavior:*
> Converting a Parent Queue to leafQueue on refreshQueue needs to be supported.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6741) Deleting all children of a Parent Queue on refresh throws exception

2017-08-15 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-6741:

Attachment: YARN-6745-branch-2.005.patch

Thanks [~bibinchundatt],
I have uploaded a patch for branch-2. Please review it.

> Deleting all children of a Parent Queue on refresh throws exception
> ---
>
> Key: YARN-6741
> URL: https://issues.apache.org/jira/browse/YARN-6741
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Affects Versions: 3.0.0-alpha3
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: YARN-6741.001.patch, YARN-6741.002.patch, 
> YARN-6741.003.patch, YARN-6741.004.patch, YARN-6741.005.patch, 
> YARN-6745-branch-2.005.patch
>
>
> If we configure CS such that all children of a parent queue are deleted and 
> the parent is converted to a leaf queue, then the {{refreshQueue}} operation 
> fails when re-initializing the parent queue
> {code}
>// Sanity check
>   if (!(newlyParsedQueue instanceof ParentQueue) || !newlyParsedQueue
>   .getQueuePath().equals(getQueuePath())) {
> throw new IOException(
> "Trying to reinitialize " + getQueuePath() + " from "
> + newlyParsedQueue.getQueuePath());
>   }
> {code}
> *Expected Behavior:*
> Converting a Parent Queue to leafQueue on refreshQueue needs to be supported.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6900) ZooKeeper based implementation of the FederationStateStore

2017-08-15 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated YARN-6900:
--
Attachment: YARN-6900-011.patch

> ZooKeeper based implementation of the FederationStateStore
> --
>
> Key: YARN-6900
> URL: https://issues.apache.org/jira/browse/YARN-6900
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation, nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Íñigo Goiri
> Attachments: YARN-6900-002.patch, YARN-6900-003.patch, 
> YARN-6900-004.patch, YARN-6900-005.patch, YARN-6900-006.patch, 
> YARN-6900-007.patch, YARN-6900-008.patch, YARN-6900-009.patch, 
> YARN-6900-010.patch, YARN-6900-011.patch, YARN-6900-YARN-2915-000.patch, 
> YARN-6900-YARN-2915-001.patch
>
>
> YARN-5408 defines the unified {{FederationStateStore}} API. Currently we only 
> support SQL-based stores; this JIRA tracks adding a ZooKeeper-based 
> implementation to simplify deployment, as ZooKeeper is already widely used for 
> {{RMStateStore}}.
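As background for readers unfamiliar with ZooKeeper-backed stores, here is a minimal sketch of the general idea, assuming Apache Curator; the connect string, znode layout, and record format below are assumptions for illustration and not the layout used by the patch:

{code}
// Minimal sketch of persisting a record under a ZooKeeper znode via Curator.
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class ZKStateStoreSketch {
  public static void main(String[] args) throws Exception {
    CuratorFramework zk = CuratorFrameworkFactory.newClient(
        "localhost:2181", new ExponentialBackoffRetry(1000, 3));
    zk.start();

    // Hypothetical layout: one znode per subcluster membership record.
    String path = "/federationstore/memberships/subcluster-1";
    byte[] record = "heartbeat-state".getBytes("UTF-8");

    if (zk.checkExists().forPath(path) == null) {
      zk.create().creatingParentsIfNeeded().forPath(path, record);
    } else {
      zk.setData().forPath(path, record);
    }

    System.out.println(new String(zk.getData().forPath(path), "UTF-8"));
    zk.close();
  }
}
{code}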



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5764) NUMA awareness support for launching containers

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128152#comment-16128152
 ] 

Hadoop QA commented on YARN-5764:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
19s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
5s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 16s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 345 unchanged - 0 fixed = 348 total (was 345) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 44s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
55s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
27s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-5764 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882039/YARN-5764-v3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 265301fc3a3f 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f34646d |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Updated] (YARN-6595) [API] Add Placement Constraints at the application level

2017-08-15 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6595:
--
Attachment: YARN-6595-YARN-6592.001.patch

Attaching initial patch to allow placement constraints to be specified in the 
{{RegisterApplicationMasterRequest}}

> [API] Add Placement Constraints at the application level
> 
>
> Key: YARN-6595
> URL: https://issues.apache.org/jira/browse/YARN-6595
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Arun Suresh
> Attachments: YARN-6595-YARN-6592.001.patch
>
>
> This JIRA allows placement constraints to be specified at the application 
> level.
> This will be used for placement constraints between different components of 
> the application.
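To make the idea concrete without reproducing the proposed API, here is a purely conceptual sketch (the tag names and constraint expressions are invented for illustration and are not the API from the patch) of what application-level constraints mean: the AM hands the RM one map of per-component constraints at registration time instead of attaching constraints to every resource request:

{code}
// Conceptual sketch only -- not the API introduced by the patch.
import java.util.HashMap;
import java.util.Map;

public class AppLevelConstraintsSketch {
  public static void main(String[] args) {
    // Hypothetical constraint expressions keyed by component (allocation tag).
    Map<String, String> constraints = new HashMap<>();
    constraints.put("hbase-master", "anti-affinity:node:hbase-master");
    constraints.put("hbase-regionserver", "affinity:rack:hbase-master");

    // An AM would submit such a map once, when registering with the RM.
    constraints.forEach((tag, expr) -> System.out.println(tag + " -> " + expr));
  }
}
{code}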



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6610) DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128139#comment-16128139
 ] 

Hadoop QA commented on YARN-6610:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3926 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 3s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} YARN-3926 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common 
generated 0 new + 4569 unchanged - 5 fixed = 4569 total (was 4574) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
35s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6610 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882050/YARN-6610.YARN-3926.perf.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux de412fc06846 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3926 / 8f80907 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16921/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16921/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DominantResourceCalculator.getResourceAsValue() dominant param is no longer 
> appropriate
> ---
>
> Key: YARN-6610
> URL: https://issues.apache.org/jira/browse/YARN-6610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>

[jira] [Commented] (YARN-7020) TestAMRMProxy#testAMRMProxyTokenRenewal is flakey

2017-08-15 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128141#comment-16128141
 ] 

Robert Kanter commented on YARN-7020:
-

The test failure is unrelated (YARN-6272).

> TestAMRMProxy#testAMRMProxyTokenRenewal is flakey
> -
>
> Key: YARN-7020
> URL: https://issues.apache.org/jira/browse/YARN-7020
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-7020.001.patch
>
>
> {{TestAMRMProxy#testAMRMProxyTokenRenewal}} is flakey.  It infrequently fails 
> with:
> {noformat}
> testAMRMProxyTokenRenewal(org.apache.hadoop.yarn.client.api.impl.TestAMRMProxy)
>   Time elapsed: 19.036 sec  <<< ERROR!
> org.apache.hadoop.yarn.exceptions.ApplicationAttemptNotFoundException: 
> Application attempt appattempt_1502837054903_0001_01 doesn't exist in 
> ApplicationMasterService cache.
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:355)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.amrmproxy.DefaultRequestInterceptor$3.allocate(DefaultRequestInterceptor.java:224)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.amrmproxy.DefaultRequestInterceptor.allocate(DefaultRequestInterceptor.java:135)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.amrmproxy.AMRMProxyService.allocate(AMRMProxyService.java:279)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1490)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1436)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1346)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>   at com.sun.proxy.$Proxy90.allocate(Unknown Source)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:77)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
>   at com.sun.proxy.$Proxy91.allocate(Unknown Source)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestAMRMProxy.testAMRMProxyTokenRenewal(TestAMRMProxy.java:190)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6900) ZooKeeper based implementation of the FederationStateStore

2017-08-15 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128122#comment-16128122
 ] 

Subru Krishnan edited comment on YARN-6900 at 8/16/17 12:14 AM:


Thanks [~elgoiri] for updating the patch. The latest patch is pretty close; a 
couple of minor comments:
* Can you rebase (_get/put/createRootDirRecursively_ from {{ZKCuratorManager}}) 
now that HADOOP-14773 is in?
* Instead of adding it in *yarn-default.xml*, I would prefer to exclude it in 
{{TestYarnConfigurationFields}}, as the ZK store is not the default store anyway.


was (Author: subru):
Thanks [~elgoiri] for updating the patch. The latest patch is pretty close, 
couple of minor comments:
* Can you rebase now that HADOOP-14773 is in.
* Instead of adding in *yarn-default.xml*, I prefer to exclude in 
{{TestYarnConfigurationFields}} as ZK store is not the default store anyways.

> ZooKeeper based implementation of the FederationStateStore
> --
>
> Key: YARN-6900
> URL: https://issues.apache.org/jira/browse/YARN-6900
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation, nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Íñigo Goiri
> Attachments: YARN-6900-002.patch, YARN-6900-003.patch, 
> YARN-6900-004.patch, YARN-6900-005.patch, YARN-6900-006.patch, 
> YARN-6900-007.patch, YARN-6900-008.patch, YARN-6900-009.patch, 
> YARN-6900-010.patch, YARN-6900-YARN-2915-000.patch, 
> YARN-6900-YARN-2915-001.patch
>
>
> YARN-5408 defines the unified {{FederationStateStore}} API. Currently we only 
> support SQL-based stores; this JIRA tracks adding a ZooKeeper-based 
> implementation to simplify deployment, as ZooKeeper is already widely used for 
> {{RMStateStore}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7011) yarn-daemon.sh is not respecting --config option

2017-08-15 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128124#comment-16128124
 ] 

Allen Wittenauer commented on YARN-7011:


Also, please run the classpath command instead of resourcemanager.

> yarn-daemon.sh is not respecting --config option
> 
>
> Key: YARN-7011
> URL: https://issues.apache.org/jira/browse/YARN-7011
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Priority: Blocker
> Fix For: 3.0.0-beta1
>
>
> Steps to reproduce:
> 1. Copy the conf to a temporary location /tmp/Conf
> 2. Modify anything in yarn-site.xml under /tmp/Conf/. Ex: Give invalid RM 
> address
> 3. Restart the resourcemanager with yarn-daemon.sh using --config /tmp/Conf
> 4. --config is not respected: the changes made in /tmp/Conf/yarn-site.xml 
> are not picked up while restarting the RM



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6900) ZooKeeper based implementation of the FederationStateStore

2017-08-15 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128122#comment-16128122
 ] 

Subru Krishnan commented on YARN-6900:
--

Thanks [~elgoiri] for updating the patch. The latest patch is pretty close; a 
couple of minor comments:
* Can you rebase now that HADOOP-14773 is in?
* Instead of adding it in *yarn-default.xml*, I would prefer to exclude it in 
{{TestYarnConfigurationFields}}, as the ZK store is not the default store anyway.

> ZooKeeper based implementation of the FederationStateStore
> --
>
> Key: YARN-6900
> URL: https://issues.apache.org/jira/browse/YARN-6900
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation, nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Íñigo Goiri
> Attachments: YARN-6900-002.patch, YARN-6900-003.patch, 
> YARN-6900-004.patch, YARN-6900-005.patch, YARN-6900-006.patch, 
> YARN-6900-007.patch, YARN-6900-008.patch, YARN-6900-009.patch, 
> YARN-6900-010.patch, YARN-6900-YARN-2915-000.patch, 
> YARN-6900-YARN-2915-001.patch
>
>
> YARN-5408 defines the unified {{FederationStateStore}} API. Currently we only 
> support SQL-based stores; this JIRA tracks adding a ZooKeeper-based 
> implementation to simplify deployment, as ZooKeeper is already widely used for 
> {{RMStateStore}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7020) TestAMRMProxy#testAMRMProxyTokenRenewal is flakey

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128120#comment-16128120
 ] 

Hadoop QA commented on YARN-7020:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 20s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestAMRMClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7020 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882042/YARN-7020.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 862dbbe5fce6 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f34646d |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16920/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16920/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16920/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestAMRMProxy#testAMRMProxyTokenRenewal is flakey
> -
>
> Key: YARN-7020
> URL: https://issues.apache.org/jira/browse/YARN-7020
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: 

[jira] [Updated] (YARN-6610) DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate

2017-08-15 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-6610:
---
Attachment: YARN-6610.YARN-3926.perf.patch

Here's a patch that adds a special case for # res == 2.  [~sunilg], any chance 
you can run it through SLS and see what the difference is?

> DominantResourceCalculator.getResourceAsValue() dominant param is no longer 
> appropriate
> ---
>
> Key: YARN-6610
> URL: https://issues.apache.org/jira/browse/YARN-6610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-6610.001.patch, YARN-6610.YARN-3926.002.patch, 
> YARN-6610.YARN-3926.003.patch, YARN-6610.YARN-3926.004.patch, 
> YARN-6610.YARN-3926.005.patch, YARN-6610.YARN-3926.006.patch, 
> YARN-6610.YARN-3926.perf.patch
>
>
> The {{dominant}} param assumes there are only two resources, i.e. true means 
> to compare the dominant, and false means to compare the subordinate.  Now 
> that there are _n_ resources, this parameter no longer makes sense.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7011) yarn-daemon.sh is not respecting --config option

2017-08-15 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128105#comment-16128105
 ] 

Allen Wittenauer commented on YARN-7011:


What happens if you run {{yarn --config foo --daemon start --debug 
resourcemanager}}?

Are you setting HADOOP_CONF_DIR in one of the -env.sh files?

> yarn-daemon.sh is not respecting --config option
> 
>
> Key: YARN-7011
> URL: https://issues.apache.org/jira/browse/YARN-7011
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Priority: Blocker
> Fix For: 3.0.0-beta1
>
>
> Steps to reproduce:
> 1. Copy the conf to a temporary location /tmp/Conf
> 2. Modify anything in yarn-site.xml under /tmp/Conf/. Ex: Give invalid RM 
> address
> 3. Restart the resourcemanager with yarn-daemon.sh using --config /tmp/Conf
> 4. --config is not respected: the changes made in /tmp/Conf/yarn-site.xml 
> are not picked up while restarting the RM



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5764) NUMA awareness support for launching containers

2017-08-15 Thread Devaraj K (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128107#comment-16128107
 ] 

Devaraj K commented on YARN-5764:
-

Thanks [~leftnoteasy] for looking into the patch and for the suggestions; I will 
update the patch accordingly.

> NUMA awareness support for launching containers
> ---
>
> Key: YARN-5764
> URL: https://issues.apache.org/jira/browse/YARN-5764
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager, yarn
>Reporter: Olasoji
>Assignee: Devaraj K
> Attachments: NUMA Awareness for YARN Containers.pdf, NUMA Performance 
> Results.pdf, YARN-5764-v0.patch, YARN-5764-v1.patch, YARN-5764-v2.patch, 
> YARN-5764-v3.patch
>
>
> The purpose of this feature is to improve Hadoop performance by minimizing 
> costly remote memory accesses on non-SMP systems. YARN containers, on launch, 
> will be pinned to a specific NUMA node and all subsequent memory allocations 
> will be served by the same node, reducing remote memory accesses. The current 
> default behavior is to spread memory across all NUMA nodes.
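As an aside for readers new to NUMA pinning, the behavior described above is what tools like numactl provide; the sketch below is hypothetical (it is not the attached patch, and the helper and command layout are assumptions) and shows how a container launch command could be prefixed so that both CPU execution and memory allocations are bound to one node:

{code}
// Hypothetical illustration of NUMA pinning via a numactl prefix.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class NumaPinningSketch {
  static List<String> withNumaBinding(List<String> launchCommand, int numaNode) {
    List<String> pinned = new ArrayList<>();
    // --cpunodebind pins execution, --membind pins memory allocations.
    pinned.addAll(Arrays.asList(
        "numactl",
        "--cpunodebind=" + numaNode,
        "--membind=" + numaNode));
    pinned.addAll(launchCommand);
    return pinned;
  }

  public static void main(String[] args) {
    System.out.println(withNumaBinding(
        Arrays.asList("bash", "launch_container.sh"), 0));
  }
}
{code}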



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6610) DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate

2017-08-15 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128098#comment-16128098
 ] 

Wangda Tan commented on YARN-6610:
--

[~templedf], I just checked: in Java, Arrays.sort uses dual-pivot quicksort, and 
the implementation actually falls back to insertion sort for tiny arrays (see 
http://codeblab.com/wp-content/uploads/2009/09/DualPivotQuicksort.pdf).

So I think the sort may not cause a significant performance regression compared 
to the original approach.

If any regression does show up, I would suggest taking a look at the additional 
array allocation as well (even though the JVM manages memory allocation pretty 
fast). We could probably use ThreadLocal static arrays to hold the resource 
arrays instead of allocating new arrays every time.
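A minimal sketch of that ThreadLocal idea (illustrative only, not the YARN code; the resource count and the share computation are placeholders):

{code}
// Reuse a per-thread scratch array instead of allocating on every call.
public class ScratchArraySketch {
  private static final int NUM_RESOURCE_TYPES = 2; // assumed fixed at startup

  private static final ThreadLocal<double[]> SCRATCH =
      ThreadLocal.withInitial(() -> new double[NUM_RESOURCE_TYPES]);

  static double maxShare(long[] used, long[] total) {
    double[] shares = SCRATCH.get(); // no per-call allocation
    double max = 0.0;
    for (int i = 0; i < shares.length; i++) {
      shares[i] = total[i] == 0 ? 0.0 : (double) used[i] / total[i];
      max = Math.max(max, shares[i]);
    }
    return max;
  }

  public static void main(String[] args) {
    System.out.println(maxShare(new long[]{512, 2}, new long[]{4096, 8}));
  }
}
{code}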

> DominantResourceCalculator.getResourceAsValue() dominant param is no longer 
> appropriate
> ---
>
> Key: YARN-6610
> URL: https://issues.apache.org/jira/browse/YARN-6610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-6610.001.patch, YARN-6610.YARN-3926.002.patch, 
> YARN-6610.YARN-3926.003.patch, YARN-6610.YARN-3926.004.patch, 
> YARN-6610.YARN-3926.005.patch, YARN-6610.YARN-3926.006.patch
>
>
> The {{dominant}} param assumes there are only two resources, i.e. true means 
> to compare the dominant, and false means to compare the subordinate.  Now 
> that there are _n_ resources, this parameter no longer makes sense.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6610) DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate

2017-08-15 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128077#comment-16128077
 ] 

Daniel Templeton commented on YARN-6610:


Looking at the diff, before this patch, the code does 2 or 4 passes through the 
resources.  I suspect 4 is the common case.  After this patch, the code does 
only 2 passes, but adds two sorts.  If you're seeing a notable performance 
difference, I would suspect the sorts.  If we want to optimize for the case of 
only CPU and memory, we can add another code path to {{DRC#compare()}} for it.
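For what such a two-resource fast path could look like, here is a rough, self-contained sketch (not the actual DominantResourceCalculator code; the share values are placeholders): with exactly two resources, the dominant and subordinate shares fall out of a max/min and no sorting is needed:

{code}
// Illustrative two-resource comparison: compare dominant shares first, then
// fall back to the subordinate shares on a tie.
public class TwoResourceComparePathSketch {
  static int compareTwoResources(double lhsMem, double lhsCpu,
                                 double rhsMem, double rhsCpu) {
    double lhsDominant = Math.max(lhsMem, lhsCpu);
    double lhsSubordinate = Math.min(lhsMem, lhsCpu);
    double rhsDominant = Math.max(rhsMem, rhsCpu);
    double rhsSubordinate = Math.min(rhsMem, rhsCpu);

    int byDominant = Double.compare(lhsDominant, rhsDominant);
    return byDominant != 0 ? byDominant
        : Double.compare(lhsSubordinate, rhsSubordinate);
  }

  public static void main(String[] args) {
    System.out.println(compareTwoResources(0.5, 0.25, 0.4, 0.45));
  }
}
{code}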

> DominantResourceCalculator.getResourceAsValue() dominant param is no longer 
> appropriate
> ---
>
> Key: YARN-6610
> URL: https://issues.apache.org/jira/browse/YARN-6610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-6610.001.patch, YARN-6610.YARN-3926.002.patch, 
> YARN-6610.YARN-3926.003.patch, YARN-6610.YARN-3926.004.patch, 
> YARN-6610.YARN-3926.005.patch, YARN-6610.YARN-3926.006.patch
>
>
> The {{dominant}} param assumes there are only two resources, i.e. true means 
> to compare the dominant, and false means to compare the subordinate.  Now 
> that there are _n_ resources, this parameter no longer makes sense.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6610) DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate

2017-08-15 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128068#comment-16128068
 ] 

Wangda Tan commented on YARN-6610:
--

[~templedf],

bq. My previous comment about performance was assuming that you're testing with 
more than 2 resources. If you're testing with only 2 resources, then I'd be 
surprised to see much of a difference. If that's the case, I can take a closer 
look.
Actually, before YARN-6788, trunk was 2x faster than the branch with 2 resources 
(measured with the perf test added by YARN-6775); the most expensive pieces were 
Collection operations (such as Set lookups) and unnecessary 
boxing/unboxing/instance initialization. So I think we need to be more careful 
here: the new feature is good and we can optimize its performance gradually, but 
we should avoid any obvious performance regression for users who are not using 
the feature.

> DominantResourceCalculator.getResourceAsValue() dominant param is no longer 
> appropriate
> ---
>
> Key: YARN-6610
> URL: https://issues.apache.org/jira/browse/YARN-6610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-6610.001.patch, YARN-6610.YARN-3926.002.patch, 
> YARN-6610.YARN-3926.003.patch, YARN-6610.YARN-3926.004.patch, 
> YARN-6610.YARN-3926.005.patch, YARN-6610.YARN-3926.006.patch
>
>
> The {{dominant}} param assumes there are only two resources, i.e. true means 
> to compare the dominant, and false means to compare the subordinate.  Now 
> that there are _n_ resources, this parameter no longer makes sense.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6892) Improve API implementation in Resources and DominantResourceCalculator in align to ResourceInformation

2017-08-15 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128060#comment-16128060
 ] 

Wangda Tan commented on YARN-6892:
--

Will commit this patch tomorrow if nobody objects.

> Improve API implementation in Resources and DominantResourceCalculator in 
> align to ResourceInformation
> --
>
> Key: YARN-6892
> URL: https://issues.apache.org/jira/browse/YARN-6892
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-6892-YARN-3926.001.patch, 
> YARN-6892-YARN-3926.002.patch, YARN-6892-YARN-3926.003.patch, 
> YARN-6892-YARN-3926.004.patch
>
>
> In YARN-3926, the APIs in Resources and DRC spend significant CPU cycles in 
> most of their calls. For better performance, it is better to improve these 
> APIs, since the resource-type order is defined at the system level (the 
> ResourceUtils class ensures this post YARN-6788).
> This work precedes YARN-6788.
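To illustrate the kind of improvement this enables (a sketch under assumptions, not the patch itself; the resource names and the operation are placeholders), a globally fixed resource-type order lets per-resource values live in a plain array indexed by that order, avoiding map lookups and boxing on the hot path:

{code}
// Index-based resource arithmetic once the type order is fixed system-wide.
public class IndexedResourceSketch {
  static final String[] RESOURCE_NAMES = {"memory-mb", "vcores"};

  static long[] add(long[] lhs, long[] rhs) {
    long[] out = new long[RESOURCE_NAMES.length];
    for (int i = 0; i < out.length; i++) {
      out[i] = lhs[i] + rhs[i]; // no per-resource name lookup
    }
    return out;
  }

  public static void main(String[] args) {
    long[] sum = add(new long[]{1024, 1}, new long[]{2048, 3});
    for (int i = 0; i < RESOURCE_NAMES.length; i++) {
      System.out.println(RESOURCE_NAMES[i] + " = " + sum[i]);
    }
  }
}
{code}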



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5764) NUMA awareness support for launching containers

2017-08-15 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128055#comment-16128055
 ] 

Wangda Tan commented on YARN-5764:
--

[~devaraj.k], 

Thanks for updating the patch; I checked the latest implementation. Some 
suggestions:

1) The patch adds a NUMA controller for both the default container executor and 
the Linux container executor. Does it make sense to use this feature under the 
default container executor, given that CPU asks might be ignored on the RM side 
(so asking for 100 vcores is the same as asking for 1 vcore)?

2) If we don't have to support the DefaultContainerExecutor, we can probably 
leverage the latest ResourceHandlerModule, which makes it easier to plug in the 
NUMA-related logic.

3) It seems this patch doesn't handle NM restart recovery. I think we need to 
recover what was allocated by the NM.

You could take a look at the approach of 
https://issues.apache.org/jira/browse/YARN-6620; some of the common libraries 
added in YARN-6620 (such as NM resource recovery) could be used to implement 
this feature.

+ [~shaneku...@gmail.com].

> NUMA awareness support for launching containers
> ---
>
> Key: YARN-5764
> URL: https://issues.apache.org/jira/browse/YARN-5764
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager, yarn
>Reporter: Olasoji
>Assignee: Devaraj K
> Attachments: NUMA Awareness for YARN Containers.pdf, NUMA Performance 
> Results.pdf, YARN-5764-v0.patch, YARN-5764-v1.patch, YARN-5764-v2.patch, 
> YARN-5764-v3.patch
>
>
> The purpose of this feature is to improve Hadoop performance by minimizing 
> costly remote memory accesses on non-SMP systems. YARN containers, on launch, 
> will be pinned to a specific NUMA node and all subsequent memory allocations 
> will be served by the same node, reducing remote memory accesses. The current 
> default behavior is to spread memory across all NUMA nodes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7020) TestAMRMProxy#testAMRMProxyTokenRenewal is flakey

2017-08-15 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-7020:

Attachment: YARN-7020.001.patch

This is due to a timing issue.  The test sets a number of configs to 1.5 second 
intervals, including {{yarn.am.liveness-monitor.expiry-interval-ms}}.  And when 
the expired event happens in {{RMAppAttemptImpl}}, it removes the app attempt 
from the cache; then if the {{ApplicationMasterService}} tries to read it from 
the cache afterwards, it can't find it and you get the error.  

I'm open to ideas on how to remove the timing element from this test, but for now 
I've upped the numbers to make it more reliable. In my testing, the original 
values could only accommodate a 1-second delay in 
{{ApplicationMasterService#allocate}}, but with my changes, they can accommodate 
a 4-second delay. This makes the test much more reliable.
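For readers following along, a hedged sketch of the kind of change described (the value below is illustrative and not the one in the patch): widening the timing-sensitive interval so that a few seconds of scheduling delay no longer expires the app attempt mid-test.

{code}
// Illustrative only: relax the AM liveness expiry used by the test setup.
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class AmrmProxyTestConfSketch {
  public static void main(String[] args) {
    YarnConfiguration conf = new YarnConfiguration();
    // The flaky version used ~1.5s; a larger value tolerates slower machines.
    conf.setLong("yarn.am.liveness-monitor.expiry-interval-ms", 6000L);
    System.out.println(conf.get("yarn.am.liveness-monitor.expiry-interval-ms"));
  }
}
{code}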

> TestAMRMProxy#testAMRMProxyTokenRenewal is flakey
> -
>
> Key: YARN-7020
> URL: https://issues.apache.org/jira/browse/YARN-7020
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-7020.001.patch
>
>
> {{TestAMRMProxy#testAMRMProxyTokenRenewal}} is flakey.  It infrequently fails 
> with:
> {noformat}
> testAMRMProxyTokenRenewal(org.apache.hadoop.yarn.client.api.impl.TestAMRMProxy)
>   Time elapsed: 19.036 sec  <<< ERROR!
> org.apache.hadoop.yarn.exceptions.ApplicationAttemptNotFoundException: 
> Application attempt appattempt_1502837054903_0001_01 doesn't exist in 
> ApplicationMasterService cache.
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:355)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.amrmproxy.DefaultRequestInterceptor$3.allocate(DefaultRequestInterceptor.java:224)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.amrmproxy.DefaultRequestInterceptor.allocate(DefaultRequestInterceptor.java:135)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.amrmproxy.AMRMProxyService.allocate(AMRMProxyService.java:279)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1490)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1436)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1346)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>   at com.sun.proxy.$Proxy90.allocate(Unknown Source)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:77)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
>   at com.sun.proxy.$Proxy91.allocate(Unknown Source)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestAMRMProxy.testAMRMProxyTokenRenewal(TestAMRMProxy.java:190)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (YARN-6892) Improve API implementation in Resources and DominantResourceCalculator in align to ResourceInformation

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128041#comment-16128041
 ] 

Hadoop QA commented on YARN-6892:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-3926 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 8s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
13s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} YARN-3926 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 12 unchanged - 1 fixed = 12 total (was 13) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
13s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6892 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882025/YARN-6892-YARN-3926.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 00c1a2935f34 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3926 / 8f80907 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16917/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16917/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Comment Edited] (YARN-7011) yarn-daemon.sh is not respecting --config option

2017-08-15 Thread Sumana Sathish (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128024#comment-16128024
 ] 

Sumana Sathish edited comment on YARN-7011 at 8/15/17 11:08 PM:


Hi [~aw],

I could see "DEBUG: HADOOP_CONF_DIR=/tmp/hadoop" but not "DEBUG: Profiles: 
importing /tmp/hadoop/". But still i couldnt see the required config change in 
RM after the start of it
{code}
 sudo su - -c "hadoopInstall/hadoop-yarn-resourcemanager/sbin/yarn-daemon.sh 
--config /tmp/hadoopConf --debug start resourcemanager" yarn | grep 
"/tmp/hadoopConf"
DEBUG: hadoop_parse_args: processing start
DEBUG: hadoop_parse: asking caller to skip 3
DEBUG: HADOOP_CONF_DIR=/tmp/hadoopConf
DEBUG: shellprofiles: 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-aliyun.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-archive-logs.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-archives.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-aws.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-azure-datalake.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-azure.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-distcp.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-extras.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-gridmix.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-hdfs.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-httpfs.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-kafka.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-kms.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-mapreduce.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-openstack.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-rumen.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-streaming.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-yarn.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-aliyun.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-archive-logs.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-archives.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-aws.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-azure-datalake.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-azure.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-distcp.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-extras.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-gridmix.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-hdfs.sh
DEBUG: HADOOP_SHELL_PROFILES accepted hdfs
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-httpfs.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-kafka.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-kms.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-mapreduce.sh
DEBUG: HADOOP_SHELL_PROFILES accepted mapred
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-openstack.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-rumen.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-streaming.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-yarn.sh
{code}


was (Author: ssath...@hortonworks.com):
Hi [~aw],

I could see "DEBUG: HADOOP_CONF_DIR=/tmp/hadoop" but not "DEBUG: Profiles: 
importing /tmp/hadoop/". But still i cannot see the change in RM after the 
start of it
{code}
 sudo su - -c "hadoopInstall/hadoop-yarn-resourcemanager/sbin/yarn-daemon.sh 
--config /tmp/hadoopConf --debug start resourcemanager" yarn | grep 
"/tmp/hadoopConf"
DEBUG: hadoop_parse_args: processing start
DEBUG: hadoop_parse: asking caller to skip 3
DEBUG: HADOOP_CONF_DIR=/tmp/hadoopConf
DEBUG: shellprofiles: 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-aliyun.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-archive-logs.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-archives.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-aws.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-azure-datalake.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-azure.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-distcp.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-extras.sh 

[jira] [Commented] (YARN-7011) yarn-daemon.sh is not respecting --config option

2017-08-15 Thread Sumana Sathish (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128024#comment-16128024
 ] 

Sumana Sathish commented on YARN-7011:
--

Hi [~aw],

I could see "DEBUG: HADOOP_CONF_DIR=/tmp/hadoop" but not "DEBUG: Profiles: 
importing /tmp/hadoop/". But still i cannot see the change in RM after the 
start of it
{code}
 sudo su - -c "hadoopInstall/hadoop-yarn-resourcemanager/sbin/yarn-daemon.sh 
--config /tmp/hadoopConf --debug start resourcemanager" yarn | grep 
"/tmp/hadoopConf"
DEBUG: hadoop_parse_args: processing start
DEBUG: hadoop_parse: asking caller to skip 3
DEBUG: HADOOP_CONF_DIR=/tmp/hadoopConf
DEBUG: shellprofiles: 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-aliyun.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-archive-logs.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-archives.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-aws.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-azure-datalake.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-azure.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-distcp.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-extras.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-gridmix.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-hdfs.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-httpfs.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-kafka.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-kms.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-mapreduce.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-openstack.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-rumen.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-streaming.sh 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-yarn.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-aliyun.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-archive-logs.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-archives.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-aws.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-azure-datalake.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-azure.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-distcp.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-extras.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-gridmix.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-hdfs.sh
DEBUG: HADOOP_SHELL_PROFILES accepted hdfs
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-httpfs.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-kafka.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-kms.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-mapreduce.sh
DEBUG: HADOOP_SHELL_PROFILES accepted mapred
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-openstack.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-rumen.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-streaming.sh
DEBUG: Profiles: importing 
hadoopInstall/hadoop-client/libexec/shellprofile.d/hadoop-yarn.sh
{code}

> yarn-daemon.sh is not respecting --config option
> 
>
> Key: YARN-7011
> URL: https://issues.apache.org/jira/browse/YARN-7011
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Priority: Blocker
> Fix For: 3.0.0-beta1
>
>
> Steps to reproduce:
> 1. Copy the conf to a temporary location /tmp/Conf
> 2. Modify anything in yarn-site.xml under /tmp/Conf/. Ex: Give invalid RM 
> address
> 3. Restart the resourcemanager using yarn-daemon.sh using --config /tmp/Conf
> 4. --config is not respected as the changes made in /tmp/Conf/yarn-site.xml 
> is not taken in while restarting RM



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5764) NUMA awareness support for launching containers

2017-08-15 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated YARN-5764:

Attachment: YARN-5764-v3.patch

> NUMA awareness support for launching containers
> ---
>
> Key: YARN-5764
> URL: https://issues.apache.org/jira/browse/YARN-5764
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager, yarn
>Reporter: Olasoji
>Assignee: Devaraj K
> Attachments: NUMA Awareness for YARN Containers.pdf, NUMA Performance 
> Results.pdf, YARN-5764-v0.patch, YARN-5764-v1.patch, YARN-5764-v2.patch, 
> YARN-5764-v3.patch
>
>
> The purpose of this feature is to improve Hadoop performance by minimizing 
> costly remote memory accesses on non-SMP systems. YARN containers, on launch, 
> will be pinned to a specific NUMA node and all subsequent memory allocations 
> will be served by the same node, reducing remote memory accesses. The current 
> default behavior is to spread memory across all NUMA nodes.
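
Purely as an illustration of the pinning idea described above (the attached patches 
define the actual mechanism; this sketch only assumes the standard numactl tool is 
installed on the node), a container launch command could be wrapped like this:
{code}
import java.util.ArrayList;
import java.util.List;

public class NumaCommandWrapper {
  /**
   * Prefix a container launch command with numactl so that both CPU and memory
   * allocations are bound to a single NUMA node. The node id is assumed to be
   * picked by some allocator; --cpunodebind and --membind are standard numactl
   * options.
   */
  public static List<String> bindToNode(int numaNodeId, List<String> launchCommand) {
    List<String> wrapped = new ArrayList<>();
    wrapped.add("numactl");
    wrapped.add("--cpunodebind=" + numaNodeId);
    wrapped.add("--membind=" + numaNodeId);
    wrapped.addAll(launchCommand);
    return wrapped;
  }
}
{code}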



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6610) DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128019#comment-16128019
 ] 

Hadoop QA commented on YARN-6610:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3926 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
58s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} YARN-3926 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common 
generated 0 new + 4569 unchanged - 5 fixed = 4569 total (was 4574) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
29s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6610 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882033/YARN-6610.YARN-3926.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bf843fd1c873 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3926 / 8f80907 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16918/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16918/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DominantResourceCalculator.getResourceAsValue() dominant param is no longer 
> appropriate
> ---
>
> Key: YARN-6610
> URL: https://issues.apache.org/jira/browse/YARN-6610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>  

[jira] [Created] (YARN-7020) TestAMRMProxy#testAMRMProxyTokenRenewal is flakey

2017-08-15 Thread Robert Kanter (JIRA)
Robert Kanter created YARN-7020:
---

 Summary: TestAMRMProxy#testAMRMProxyTokenRenewal is flakey
 Key: YARN-7020
 URL: https://issues.apache.org/jira/browse/YARN-7020
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0-beta1
Reporter: Robert Kanter
Assignee: Robert Kanter


{{TestAMRMProxy#testAMRMProxyTokenRenewal}} is flakey.  It infrequently fails 
with:
{noformat}
testAMRMProxyTokenRenewal(org.apache.hadoop.yarn.client.api.impl.TestAMRMProxy) 
 Time elapsed: 19.036 sec  <<< ERROR!
org.apache.hadoop.yarn.exceptions.ApplicationAttemptNotFoundException: 
Application attempt appattempt_1502837054903_0001_01 doesn't exist in 
ApplicationMasterService cache.
at 
org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:355)
at 
org.apache.hadoop.yarn.server.nodemanager.amrmproxy.DefaultRequestInterceptor$3.allocate(DefaultRequestInterceptor.java:224)
at 
org.apache.hadoop.yarn.server.nodemanager.amrmproxy.DefaultRequestInterceptor.allocate(DefaultRequestInterceptor.java:135)
at 
org.apache.hadoop.yarn.server.nodemanager.amrmproxy.AMRMProxyService.allocate(AMRMProxyService.java:279)
at 
org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
at 
org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)

at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1490)
at org.apache.hadoop.ipc.Client.call(Client.java:1436)
at org.apache.hadoop.ipc.Client.call(Client.java:1346)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy90.allocate(Unknown Source)
at 
org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:77)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy91.allocate(Unknown Source)
at 
org.apache.hadoop.yarn.client.api.impl.TestAMRMProxy.testAMRMProxyTokenRenewal(TestAMRMProxy.java:190)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7019) Ability for applications to notify YARN about container reuse

2017-08-15 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127996#comment-16127996
 ] 

Arun Suresh edited comment on YARN-7019 at 8/15/17 10:43 PM:
-

[~jlowe], given that container reuse, especially Tez's implementation of it, is 
pretty much NM agnostic (it is essentially a Tez AM - Tez container protocol), I 
was wondering whether, instead of notifying the RM of how many times a container 
has been re-used, a more general way to solve this might be to introduce a 
*preempt-ability* score for a container. Initially, all containers of the AM are 
equally preemptible, but once the AM has, say, re-used a container a certain number 
of times, or perhaps decided to use the container for some best-effort task, it can 
lower the preemptability score of the container at the RM in the next allocate 
call. Thoughts?


was (Author: asuresh):
[~jlowe], given that Container reuse, espescially Tez's implementation of it, 
is pretty much NM agnostic. I was wondering if - instead of notifying the RM of 
the how many times a container has been re-used, maybe a more general way to 
solve this might be to introduce a *preempt-ability* score for a container. 
Initially, all containers of the AM are equally preemptible, but once the AM 
has say 're-used' a container certain number of times or perhaps decided to use 
the container for some best effort task, it can lower the preemptability score 
of the Container at the RM in the next allocate call. Thoughts ?

> Ability for applications to notify YARN about container reuse
> -
>
> Key: YARN-7019
> URL: https://issues.apache.org/jira/browse/YARN-7019
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Jason Lowe
>
> During preemption calculations YARN can try to reduce the amount of work lost 
> by considering how long a container has been running.  However when an 
> application framework like Tez reuses a container across multiple tasks it 
> changes the work lost calculation since the container has essentially 
> checkpointed between task assignments.  It would be nice if applications 
> could inform YARN when a container has been reused/checkpointed and therefore 
> is a better candidate for preemption wrt. lost work than other, younger 
> containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7019) Ability for applications to notify YARN about container reuse

2017-08-15 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127996#comment-16127996
 ] 

Arun Suresh commented on YARN-7019:
---

[~jlowe], given that container reuse, especially Tez's implementation of it, is 
pretty much NM agnostic, I was wondering whether, instead of notifying the RM of 
how many times a container has been re-used, a more general way to solve this might 
be to introduce a *preempt-ability* score for a container. Initially, all 
containers of the AM are equally preemptible, but once the AM has, say, re-used a 
container a certain number of times, or perhaps decided to use the container for 
some best-effort task, it can lower the preemptability score of the container at 
the RM in the next allocate call. Thoughts?
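
A minimal sketch of what such a score could look like if it were carried on the 
allocate call (every name below is invented for illustration; nothing like this 
exists in the AM-RM protocol today):
{code}
// Hypothetical record carried on the AM-RM allocate request; names are invented.
public class ContainerPreemptability {
  private final String containerId;   // container whose score is being updated
  private final float score;          // 0.0 = avoid preempting, 1.0 = freely preemptible

  public ContainerPreemptability(String containerId, float score) {
    this.containerId = containerId;
    this.score = score;
  }

  public String getContainerId() {
    return containerId;
  }

  public float getScore() {
    return score;
  }
}
{code}
An AM reusing a container would lower that container's score on its next heartbeat, 
and the RM preemption policy would then prefer higher-scored, younger containers 
when it needs to reclaim resources.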

> Ability for applications to notify YARN about container reuse
> -
>
> Key: YARN-7019
> URL: https://issues.apache.org/jira/browse/YARN-7019
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Jason Lowe
>
> During preemption calculations YARN can try to reduce the amount of work lost 
> by considering how long a container has been running.  However when an 
> application framework like Tez reuses a container across multiple tasks it 
> changes the work lost calculation since the container has essentially 
> checkpointed between task assignments.  It would be nice if applications 
> could inform YARN when a container has been reused/checkpointed and therefore 
> is a better candidate for preemption wrt. lost work than other, younger 
> containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6992) "Kill application" button is present even if the application is FINISHED in RM UI

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127982#comment-16127982
 ] 

Hadoop QA commented on YARN-6992:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: 
The patch generated 1 new + 21 unchanged - 0 fixed = 22 total (was 21) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
18s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6992 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882021/YARN-6992.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 77405a570e88 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d265459 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16916/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16916/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16916/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> "Kill application" button is present even if the application is FINISHED in 
> RM UI
> 

[jira] [Commented] (YARN-6610) DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate

2017-08-15 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127978#comment-16127978
 ] 

Daniel Templeton commented on YARN-6610:


My previous comment about performance was assuming that you're testing with 
more than 2 resources.  If you're testing with only 2 resources, then I'd be 
surprised to see much of a difference.  If that's the case, I can take a closer 
look.

> DominantResourceCalculator.getResourceAsValue() dominant param is no longer 
> appropriate
> ---
>
> Key: YARN-6610
> URL: https://issues.apache.org/jira/browse/YARN-6610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-6610.001.patch, YARN-6610.YARN-3926.002.patch, 
> YARN-6610.YARN-3926.003.patch, YARN-6610.YARN-3926.004.patch, 
> YARN-6610.YARN-3926.005.patch, YARN-6610.YARN-3926.006.patch
>
>
> The {{dominant}} param assumes there are only two resources, i.e. true means 
> to compare the dominant, and false means to compare the subordinate.  Now 
> that there are _n_ resources, this parameter no longer makes sense.
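
To make the quoted description concrete: with _n_ resources the comparison boils 
down to looking at all shares rather than a dominant/subordinate pair. A rough 
sketch of that generalization (illustrative only, not the code in the attached 
patches):
{code}
import java.util.Arrays;

public class DominantShares {
  // Compute the share of each resource and return them sorted descending, so
  // index 0 is the most dominant share, index 1 the next, and so on.
  static double[] sortedShares(long[] used, long[] clusterTotal) {
    double[] shares = new double[used.length];
    for (int i = 0; i < used.length; i++) {
      shares[i] = clusterTotal[i] == 0 ? 0.0 : (double) used[i] / clusterTotal[i];
    }
    Arrays.sort(shares);  // ascending
    for (int lo = 0, hi = shares.length - 1; lo < hi; lo++, hi--) {
      double t = shares[lo]; shares[lo] = shares[hi]; shares[hi] = t;  // reverse
    }
    return shares;
  }

  // Lexicographic comparison from the most dominant share down, instead of a
  // boolean "dominant" flag that only made sense for exactly two resources.
  static int compare(long[] lhsUsed, long[] rhsUsed, long[] clusterTotal) {
    double[] l = sortedShares(lhsUsed, clusterTotal);
    double[] r = sortedShares(rhsUsed, clusterTotal);
    for (int i = 0; i < l.length; i++) {
      int c = Double.compare(l[i], r[i]);
      if (c != 0) {
        return c;  // first differing share decides
      }
    }
    return 0;
  }
}
{code}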



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6610) DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate

2017-08-15 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-6610:
---
Attachment: YARN-6610.YARN-3926.006.patch

Added unit tests and fixed an error.

> DominantResourceCalculator.getResourceAsValue() dominant param is no longer 
> appropriate
> ---
>
> Key: YARN-6610
> URL: https://issues.apache.org/jira/browse/YARN-6610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-6610.001.patch, YARN-6610.YARN-3926.002.patch, 
> YARN-6610.YARN-3926.003.patch, YARN-6610.YARN-3926.004.patch, 
> YARN-6610.YARN-3926.005.patch, YARN-6610.YARN-3926.006.patch
>
>
> The {{dominant}} param assumes there are only two resources, i.e. true means 
> to compare the dominant, and false means to compare the subordinate.  Now 
> that there are _n_ resources, this parameter no longer makes sense.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6257) CapacityScheduler REST API produces incorrect JSON - JSON object operationsInfo contains deplicate key

2017-08-15 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127958#comment-16127958
 ] 

Wangda Tan commented on YARN-6257:
--

[~Tao Yang], 

Thanks for the explanation. I just checked 
https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Scheduler_API,
 and the health-info related fields, and CapacitySchedulerHealthInfo as a whole, 
are not part of the RM REST API doc. According to 
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html#REST_APIs,
 we should be free to modify such fields. 

[~sunilg], what do you think?
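
For context, the conversion [~Tao Yang] describes in the issue (quoted below) would 
roughly amount to replacing the Map-valued field with a list of simple beans. This 
is only a sketch; the actual field and class names in CapacitySchedulerHealthInfo 
may differ:
{code}
import java.util.ArrayList;
import java.util.List;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

// Illustrative List-based representation; a List serializes as a JSON array,
// so each operation becomes one element instead of a repeated "entry" key.
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
class OperationInformation {
  String operation;    // e.g. "last-allocation"
  String nodeId;
  String containerId;
  String queue;

  OperationInformation() { }  // no-arg constructor required by JAXB

  OperationInformation(String operation, String nodeId, String containerId, String queue) {
    this.operation = operation;
    this.nodeId = nodeId;
    this.containerId = containerId;
    this.queue = queue;
  }
}

@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
class SchedulerHealthSketch {
  List<OperationInformation> operationInfos = new ArrayList<>();
}
{code}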

> CapacityScheduler REST API produces incorrect JSON - JSON object 
> operationsInfo contains deplicate key
> --
>
> Key: YARN-6257
> URL: https://issues.apache.org/jira/browse/YARN-6257
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.8.1
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Minor
> Attachments: YARN-6257.001.patch
>
>
> In response string of CapacityScheduler REST API, 
> scheduler/schedulerInfo/health/operationsInfo have duplicate key 'entry' as a 
> JSON object :
> {code}
> "operationsInfo":{
>   
> "entry":{"key":"last-preemption","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
>   
> "entry":{"key":"last-reservation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
>   
> "entry":{"key":"last-allocation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
>   
> "entry":{"key":"last-release","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}}
> }
> {code}
> To solve this problem, I suppose the type of operationsInfo field in 
> CapacitySchedulerHealthInfo class should be converted from Map to List.
> After convert to List, The operationsInfo string will be:
> {code}
> "operationInfos":[
>   
> {"operation":"last-allocation","nodeId":"N/A","containerId":"N/A","queue":"N/A"},
>   
> {"operation":"last-release","nodeId":"N/A","containerId":"N/A","queue":"N/A"},
>   
> {"operation":"last-preemption","nodeId":"N/A","containerId":"N/A","queue":"N/A"},
>   
> {"operation":"last-reservation","nodeId":"N/A","containerId":"N/A","queue":"N/A"}
> ]
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6892) Improve API implementation in Resources and DominantResourceCalculator in align to ResourceInformation

2017-08-15 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6892:
-
Attachment: YARN-6892-YARN-3926.004.patch

Fixed findbugs warning in the .004 patch.

> Improve API implementation in Resources and DominantResourceCalculator in 
> align to ResourceInformation
> --
>
> Key: YARN-6892
> URL: https://issues.apache.org/jira/browse/YARN-6892
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-6892-YARN-3926.001.patch, 
> YARN-6892-YARN-3926.002.patch, YARN-6892-YARN-3926.003.patch, 
> YARN-6892-YARN-3926.004.patch
>
>
> In YARN-3926, the APIs in Resources and DRC spend significant CPU cycles in most 
> of their methods. For better performance, it is better to improve these APIs, 
> since the resource types order is defined at the system level (the ResourceUtils 
> class ensures this post-YARN-6788).
> This work precedes YARN-6788.
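
Roughly, the direction described above is to stop doing name-based lookups on every 
call and instead walk the resource types once in their fixed, system-wide order. A 
sketch under that assumption (the array-based accessors here are illustrative and 
are not the actual branch API):
{code}
// Illustrative only: assumes each Resource is exposed as a long[] whose indices
// follow the fixed, system-wide resource ordering, so no per-name map lookup is
// needed inside the hot arithmetic paths.
public class ResourceOps {
  static long[] add(long[] lhs, long[] rhs) {
    long[] out = new long[lhs.length];
    for (int i = 0; i < lhs.length; i++) {
      out[i] = lhs[i] + rhs[i];   // single pass over the fixed ordering
    }
    return out;
  }

  static boolean fitsIn(long[] smaller, long[] bigger) {
    for (int i = 0; i < smaller.length; i++) {
      if (smaller[i] > bigger[i]) {
        return false;
      }
    }
    return true;
  }
}
{code}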



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6988) container-executor fails for docker when command length > 4096 B

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127949#comment-16127949
 ] 

Hadoop QA commented on YARN-6988:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m 36s{color} | 
{color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 36s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 35s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6988 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882017/YARN-6988.001.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux dd978eebe5b9 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d265459 |
| Default Java | 1.8.0_144 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/16915/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| cc | 
https://builds.apache.org/job/PreCommit-YARN-Build/16915/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/16915/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16915/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16915/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16915/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> container-executor fails for docker when command length > 4096 B
> 
>
> Key: YARN-6988
> URL: https://issues.apache.org/jira/browse/YARN-6988
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>   

[jira] [Commented] (YARN-6736) Consider writing to both ats v1 & v2 from RM for smoother upgrades

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127919#comment-16127919
 ] 

Hadoop QA commented on YARN-6736:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
1s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} YARN-6736 does not apply to YARN-5355. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6736 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882015/YARN-6736-YARN-5355.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16914/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Consider writing to both ats v1 & v2 from RM for smoother upgrades
> --
>
> Key: YARN-6736
> URL: https://issues.apache.org/jira/browse/YARN-6736
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Aaron Gresch
> Attachments: YARN-6736-YARN-5355.001.patch, 
> YARN-6736-YARN-5355.002.patch
>
>
> When the cluster is being upgraded from atsv1 to v2, it may be good to have a 
> brief time period during which RM writes to both atsv1 and v2. This will help 
> frameworks like Tez migrate more smoothly. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6992) "Kill application" button is present even if the application is FINISHED in RM UI

2017-08-15 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-6992:
---
Description: 
Kill button should not be displayed for FAILED, KILLED and FINISHED apps in 
Application specific landing page


  was:Kill button should not be displayed for FAILED, KILLED and FINISHED apps


> "Kill application" button is present even if the application is FINISHED in 
> RM UI
> -
>
> Key: YARN-6992
> URL: https://issues.apache.org/jira/browse/YARN-6992
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Suma Shivaprasad
> Attachments: YARN-6992.001.patch
>
>
> Kill button should not be displayed for FAILED, KILLED and FINISHED apps in 
> Application specific landing page



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6992) "Kill application" button is present even if the application is FINISHED in RM UI

2017-08-15 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-6992:
---
Attachment: YARN-6992.001.patch

> "Kill application" button is present even if the application is FINISHED in 
> RM UI
> -
>
> Key: YARN-6992
> URL: https://issues.apache.org/jira/browse/YARN-6992
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Suma Shivaprasad
> Attachments: YARN-6992.001.patch
>
>
> Kill button should not be displayed for FAILED, KILLED and FINISHED apps in 
> Application specific landing page



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6992) "Kill application" button is present even if the application is FINISHED in RM UI

2017-08-15 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-6992:
---
Description: Kill button should not be displayed for FAILED, KILLED and 
FINISHED apps

> "Kill application" button is present even if the application is FINISHED in 
> RM UI
> -
>
> Key: YARN-6992
> URL: https://issues.apache.org/jira/browse/YARN-6992
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Suma Shivaprasad
>
> Kill button should not be displayed for FAILED, KILLED and FINISHED apps



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6988) container-executor fails for docker when command length > 4096 B

2017-08-15 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-6988:
--
Attachment: YARN-6988.001.patch

Attaching a patch that increases the maximum size of the docker command buffer to 
128 KB. This decouples it from EXECUTOR_PATH_MAX and does not override the change 
that [~vvasudev] is making in YARN-6623. 

> container-executor fails for docker when command length > 4096 B
> 
>
> Key: YARN-6988
> URL: https://issues.apache.org/jira/browse/YARN-6988
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: YARN-6988.001.patch
>
>
> {{run_docker}} and {{launch_docker_container_as_user}} allocate their command 
> arrays using EXECUTOR_PATH_MAX, which is hardcoded to 4096 in 
> configuration.h. Because of this, the full docker command can only be 4096 
> characters. If it is longer, it will be truncated and the command will fail 
> with a parsing error. Because of the bind-mounting of volumes, the arguments 
> to the docker command can quickly get large. For example, I passed the 4096 
> limit with an 11 disk node. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6736) Consider writing to both ats v1 & v2 from RM for smoother upgrades

2017-08-15 Thread Aaron Gresch (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Gresch updated YARN-6736:
---
Attachment: YARN-6736-YARN-5355.002.patch

> Consider writing to both ats v1 & v2 from RM for smoother upgrades
> --
>
> Key: YARN-6736
> URL: https://issues.apache.org/jira/browse/YARN-6736
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Aaron Gresch
> Attachments: YARN-6736-YARN-5355.001.patch, 
> YARN-6736-YARN-5355.002.patch
>
>
> When the cluster is being upgraded from atsv1 to v2, it may be good to have a 
> brief time period during which RM writes to both atsv1 and v2. This will help 
> frameworks like Tez migrate more smoothly. 
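
As a sketch of what "write to both" could look like inside the RM during the 
upgrade window (the interface and class names below are invented; the attached 
patches define the real wiring), the metrics publishing path could simply fan out 
to both backends while the flag is on:
{code}
import java.util.ArrayList;
import java.util.List;

// Illustrative fan-out publisher; all names here are hypothetical.
interface AppEventPublisher {
  void appCreated(String appId, long timestamp);
}

class DualTimelinePublisher implements AppEventPublisher {
  private final List<AppEventPublisher> delegates = new ArrayList<>();

  DualTimelinePublisher(AppEventPublisher v1, AppEventPublisher v2,
                        boolean publishV1, boolean publishV2) {
    if (publishV1) {
      delegates.add(v1);   // ATS v1 writer, kept only for the upgrade window
    }
    if (publishV2) {
      delegates.add(v2);   // ATS v2 writer
    }
  }

  @Override
  public void appCreated(String appId, long timestamp) {
    for (AppEventPublisher p : delegates) {
      p.appCreated(appId, timestamp);   // same event goes to every enabled backend
    }
  }
}
{code}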



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7014) container-executor has off-by-one error which can corrupt the heap

2017-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127889#comment-16127889
 ] 

Hudson commented on YARN-7014:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12191 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12191/])
YARN-7014. Fix off-by-one error causing heap corruption (Jason Lowe via nroberts) 
(nroberts: rev d265459024b8e5f5eccf421627f684ca8f162112)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.c


> container-executor has off-by-one error which can corrupt the heap
> --
>
> Key: YARN-7014
> URL: https://issues.apache.org/jira/browse/YARN-7014
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-beta1
>Reporter: Shane Kumpf
>Assignee: Jason Lowe
>Priority: Critical
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-7014.001.patch
>
>
> test-container-executor is failing in trunk.
> {code}
> [INFO] 
> [INFO] --- hadoop-maven-plugins:3.0.0-beta1-SNAPSHOT:cmake-test 
> (test-container-executor) @ hadoop-yarn-server-nodemanager ---
> [INFO] ---
> [INFO]  C M A K E   B U I L D E R   T E S T
> [INFO] ---
> [INFO] test-container-executor: running 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/target/usr/local/bin/test-container-executor
> [INFO] with extra environment variables {}
> [INFO] STATUS: ERROR CODE 134 after 3714 millisecond(s).
> [INFO] ---
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 13:47 min
> [INFO] Finished at: 2017-08-12T12:58:55+00:00
> [INFO] Final Memory: 19M/296M
> [INFO] 
> 
> [WARNING] The requested profile "parallel-tests" could not be activated 
> because it does not exist.
> [WARNING] The requested profile "yarn-ui" could not be activated because it 
> does not exist.
> [ERROR] Failed to execute goal 
> org.apache.hadoop:hadoop-maven-plugins:3.0.0-beta1-SNAPSHOT:cmake-test 
> (test-container-executor) on project hadoop-yarn-server-nodemanager: Test 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/target/usr/local/bin/test-container-executor
>  returned ERROR CODE 134 -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}
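
For context, the usual shape of this class of bug is sizing a heap buffer without room for the terminating NUL byte. The following is a hedged illustration of the pattern and its fix, not the actual string-utils.c code.

{code:title=off-by-one-sketch.c}
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Buggy: strlen() does not count the trailing '\0', so strcpy() writes one
 * byte past the end of the allocation and can corrupt heap metadata. */
char *dup_string_buggy(const char *src) {
  char *dst = malloc(strlen(src));
  strcpy(dst, src);              /* heap overflow by one byte */
  return dst;
}

/* Fixed: reserve strlen(src) + 1 bytes so the terminator fits. */
char *dup_string_fixed(const char *src) {
  size_t len = strlen(src) + 1;
  char *dst = malloc(len);
  if (dst != NULL) {
    memcpy(dst, src, len);
  }
  return dst;
}

int main(void) {
  char *s = dup_string_fixed("container_e01_1502825382000_0001_01_000001");
  if (s != NULL) {
    printf("%s\n", s);
    free(s);
  }
  return 0;
}
{code}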



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7019) Ability for applications to notify YARN about container reuse

2017-08-15 Thread Jason Lowe (JIRA)
Jason Lowe created YARN-7019:


 Summary: Ability for applications to notify YARN about container 
reuse
 Key: YARN-7019
 URL: https://issues.apache.org/jira/browse/YARN-7019
 Project: Hadoop YARN
  Issue Type: New Feature
Reporter: Jason Lowe


During preemption calculations YARN can try to reduce the amount of work lost 
by considering how long a container has been running.  However when an 
application framework like Tez reuses a container across multiple tasks it 
changes the work lost calculation since the container has essentially 
checkpointed between task assignments. It would be nice if applications could 
inform YARN when a container has been reused/checkpointed and is therefore a 
better candidate for preemption, with respect to lost work, than other, younger containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-15 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127883#comment-16127883
 ] 

Vrushali C commented on YARN-6820:
--

Thanks [~jlowe], appreciate it. 

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Fix For: YARN-5355, YARN-5355-branch-2
>
> Attachments: YARN-6820-YARN-5355.0001.patch, 
> YARN-6820-YARN-5355.002.patch, YARN-6820-YARN-5355.003.patch, 
> YARN-6820-YARN-5355.004.patch, YARN-6820-YARN-5355.005.patch, 
> YARN-6820-YARN-5355_branch_2.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSv1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.
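
As a purely illustrative sketch of the "configuration setting for the cluster" option mentioned above, a read whitelist could look something like the following; the property names are hypothetical and not necessarily what the attached patches implement.

{noformat}
# yarn-site.xml settings, illustrative names only
yarn.timeline-service.read.authentication.enabled = true
yarn.timeline-service.read.allowed-users = admin,analytics_svc
{noformat}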



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7018) Interface for adding extra behavior to node heartbeats

2017-08-15 Thread Jason Lowe (JIRA)
Jason Lowe created YARN-7018:


 Summary: Interface for adding extra behavior to node heartbeats
 Key: YARN-7018
 URL: https://issues.apache.org/jira/browse/YARN-7018
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: resourcemanager
Reporter: Jason Lowe
Assignee: Jason Lowe


This JIRA tracks an interface for plugging new behavior into node heartbeat 
processing. A formal interface would allow admins to configure additional, 
scheduler-independent functionality without needing to replace the entire 
scheduler. For example, both YARN-5202 and YARN-5215 took approaches where node 
heartbeat processing was extended to implement functionality that was essentially 
scheduler-independent and could have been implemented as a plugin through such an 
interface.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7014) container-executor has off-by-one error which can corrupt the heap

2017-08-15 Thread Nathan Roberts (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Roberts updated YARN-7014:
-
Fix Version/s: 3.0.0-beta1

> container-executor has off-by-one error which can corrupt the heap
> --
>
> Key: YARN-7014
> URL: https://issues.apache.org/jira/browse/YARN-7014
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-beta1
>Reporter: Shane Kumpf
>Assignee: Jason Lowe
>Priority: Critical
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-7014.001.patch
>
>
> test-container-executor is failing in trunk.
> {code}
> [INFO] 
> [INFO] --- hadoop-maven-plugins:3.0.0-beta1-SNAPSHOT:cmake-test 
> (test-container-executor) @ hadoop-yarn-server-nodemanager ---
> [INFO] ---
> [INFO]  C M A K E   B U I L D E R   T E S T
> [INFO] ---
> [INFO] test-container-executor: running 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/target/usr/local/bin/test-container-executor
> [INFO] with extra environment variables {}
> [INFO] STATUS: ERROR CODE 134 after 3714 millisecond(s).
> [INFO] ---
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 13:47 min
> [INFO] Finished at: 2017-08-12T12:58:55+00:00
> [INFO] Final Memory: 19M/296M
> [INFO] 
> 
> [WARNING] The requested profile "parallel-tests" could not be activated 
> because it does not exist.
> [WARNING] The requested profile "yarn-ui" could not be activated because it 
> does not exist.
> [ERROR] Failed to execute goal 
> org.apache.hadoop:hadoop-maven-plugins:3.0.0-beta1-SNAPSHOT:cmake-test 
> (test-container-executor) on project hadoop-yarn-server-nodemanager: Test 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/target/usr/local/bin/test-container-executor
>  returned ERROR CODE 134 -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-15 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-6820:
-
Fix Version/s: YARN-5355-branch-2

Thanks, Vrushali!  I committed the branch-2 patch to YARN-5355_branch2.

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Fix For: YARN-5355, YARN-5355-branch-2
>
> Attachments: YARN-6820-YARN-5355.0001.patch, 
> YARN-6820-YARN-5355.002.patch, YARN-6820-YARN-5355.003.patch, 
> YARN-6820-YARN-5355.004.patch, YARN-6820-YARN-5355.005.patch, 
> YARN-6820-YARN-5355_branch_2.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSv1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7014) container-executor has off-by-one error which can corrupt the heap

2017-08-15 Thread Nathan Roberts (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127824#comment-16127824
 ] 

Nathan Roberts commented on YARN-7014:
--

+1 on the patch. I will commit shortly.
Thanks [~jlowe] for the patch and  [~ebadger] and [~shaneku...@gmail.com] for 
the reviews!

> container-executor has off-by-one error which can corrupt the heap
> --
>
> Key: YARN-7014
> URL: https://issues.apache.org/jira/browse/YARN-7014
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-beta1
>Reporter: Shane Kumpf
>Assignee: Jason Lowe
>Priority: Critical
> Attachments: YARN-7014.001.patch
>
>
> test-container-executor is failing in trunk.
> {code}
> [INFO] 
> [INFO] --- hadoop-maven-plugins:3.0.0-beta1-SNAPSHOT:cmake-test 
> (test-container-executor) @ hadoop-yarn-server-nodemanager ---
> [INFO] ---
> [INFO]  C M A K E   B U I L D E R   T E S T
> [INFO] ---
> [INFO] test-container-executor: running 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/target/usr/local/bin/test-container-executor
> [INFO] with extra environment variables {}
> [INFO] STATUS: ERROR CODE 134 after 3714 millisecond(s).
> [INFO] ---
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 13:47 min
> [INFO] Finished at: 2017-08-12T12:58:55+00:00
> [INFO] Final Memory: 19M/296M
> [INFO] 
> 
> [WARNING] The requested profile "parallel-tests" could not be activated 
> because it does not exist.
> [WARNING] The requested profile "yarn-ui" could not be activated because it 
> does not exist.
> [ERROR] Failed to execute goal 
> org.apache.hadoop:hadoop-maven-plugins:3.0.0-beta1-SNAPSHOT:cmake-test 
> (test-container-executor) on project hadoop-yarn-server-nodemanager: Test 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/target/usr/local/bin/test-container-executor
>  returned ERROR CODE 134 -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6964) Fair scheduler misuses Resources operations

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127790#comment-16127790
 ] 

Hadoop QA commented on YARN-6964:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
37s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 28s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6964 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881985/YARN-6964.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 98ec729bf67e 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dadb0c2 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16913/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16913/testReport/ |
| modules | C: 

[jira] [Comment Edited] (YARN-7017) Enable preemption for a single queue.

2017-08-15 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127774#comment-16127774
 ] 

Eric Payne edited comment on YARN-7017 at 8/15/17 7:56 PM:
---

Thanks [~saruntek] for raising this issue.

{quote}
As of today the only way to enable preemption at a queue level is:
- Enable cluster level preemption
- Disable preemption on the queues where not required using 
yarn.scheduler.capacity..disable_preemption to true

{quote}
Actually, you don't need to {{disable_preemption}} for every queue. The 
{{disable_preemption}} property is inherited, so you can:
- Enable cluster level preemption
- Disable preemption on the root queue using 
{{yarn.scheduler.capacity.root.disable_preemption = true}}
- Enable preemption on the queues where required using 
{{yarn.scheduler.capacity..disable_preemption = false}}


was (Author: eepayne):
Thanks [~saruntek] for raising this issue.

{quote}
As of today the only way to enable preemption at a queue level is:
- Enable cluster level preemption
- Disable preemption on the queues where not required using 
yarn.scheduler.capacity..disable_preemption to true

{quote}
Actually, you don't need to {{disable_preemption}} for every queue. The 
{{disable_preemption}} property is inherited, so you can:
- Enable cluster level preemption
- Disable preemption on the root queue using 
{{yarn.scheduler.capacity.root.disable_preemption = true}}
- Enable preemption on the queues where required using 
{{yarn.scheduler.capacity..disable_preemption = true}}
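
Putting the corrected advice together, a sketch of the relevant capacity-scheduler settings, shown as name = value; the queue path "root.prod" is only an example:

{noformat}
# Cluster-level preemption on
yarn.resourcemanager.scheduler.monitor.enable = true
# Turn preemption off at the root; child queues inherit this
yarn.scheduler.capacity.root.disable_preemption = true
# Re-enable preemption only on the queues that need it
yarn.scheduler.capacity.root.prod.disable_preemption = false
{noformat}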

> Enable preemption for a single queue.
> -
>
> Key: YARN-7017
> URL: https://issues.apache.org/jira/browse/YARN-7017
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler, yarn
>Reporter: sarun
>
> *PROBLEM*
> How to enable preemption on a single queue in a cluster?
> *DESCRIPTION*
> As of today the only way to enable preemption at a queue level is:
> * Enable cluster level preemption
> * Disable preemption on the queues where not required using 
> yarn.scheduler.capacity..disable_preemption to true
> Can we have some sort of a parameter like 
> *_yarn.scheduler.capacity..enable_preemption_*
> which would just enable preemption per queue instead of going the other way 
> round which is more time consuming and error prone.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7017) Enable preemption for a single queue.

2017-08-15 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127774#comment-16127774
 ] 

Eric Payne commented on YARN-7017:
--

Thanks [~saruntek] for raising this issue.

{quote}
As of today the only way to enable preemption at a queue level is:
- Enable cluster level preemption
- Disable preemption on the queues where not required using 
yarn.scheduler.capacity..disable_preemption to true

{quote}
Actually, you don't need to {{disable_preemption}} for every queue. The 
{{disable_preemption}} property is inherited, so you can:
- Enable cluster level preemption
- Disable preemption on the root queue using 
{{yarn.scheduler.capacity.root.disable_preemption = true}}
- Enable preemption on the queues where required using 
{{yarn.scheduler.capacity..disable_preemption = true}}

> Enable preemption for a single queue.
> -
>
> Key: YARN-7017
> URL: https://issues.apache.org/jira/browse/YARN-7017
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler, yarn
>Reporter: sarun
>
> *PROBLEM*
> How to enable preemption on a single queue in a cluster?
> *DESCRIPTION*
> As of today the only way to enable preemption at a queue level is:
> * Enable cluster level preemption
> * Disable preemption on the queues where not required using 
> yarn.scheduler.capacity..disable_preemption to true
> Can we have some sort of a parameter like 
> *_yarn.scheduler.capacity..enable_preemption_*
> which would just enable preemption per queue instead of going the other way 
> round which is more time consuming and error prone.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7017) Enable preemption for a single queue.

2017-08-15 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7017:
---
Component/s: capacity scheduler

> Enable preemption for a single queue.
> -
>
> Key: YARN-7017
> URL: https://issues.apache.org/jira/browse/YARN-7017
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler, yarn
>Reporter: sarun
>
> *PROBLEM*
> How to enable preemption on a single queue in a cluster?
> *DESCRIPTION*
> As of today the only way to enable preemption at a queue level is:
> * Enable cluster level preemption
> * Disable preemption on the queues where not required using 
> yarn.scheduler.capacity..disable_preemption to true
> Can we have some sort of a parameter like 
> *_yarn.scheduler.capacity..enable_preemption_*
> which would just enable preemption per queue instead of going the other way 
> round which is more time consuming and error prone.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7017) Enable preemption for a single queue.

2017-08-15 Thread sarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sarun updated YARN-7017:

Description: 
*PROBLEM*
How to enable preemption on a single queue in a cluster?
*DESCRIPTION*
As of today the only way to enable preemption at a queue level is:
* Enable cluster level preemption
* Disable preemption on the queues where not required using 
yarn.scheduler.capacity..disable_preemption to true

Can we have some sort of a parameter like 
*_yarn.scheduler.capacity..enable_preemption_*
which would just enable preemption per queue instead of going the other way 
round which is more time consuming and error prone.



  was:
PROBLEM
How to enable preemption on a single queue in a cluster?
DESCRIPTION
As of today the only way to enable preemption at a queue level is:
* Enable cluster level preemption
* Disable preemption on the queues where not required using 
yarn.scheduler.capacity..disable_preemption to true

Can we have some sort of a parameter like 
*_yarn.scheduler.capacity..enable_preemption_*
which would just enable preemption per queue instead of going the other way 
round which is more time consuming and error prone.




> Enable preemption for a single queue.
> -
>
> Key: YARN-7017
> URL: https://issues.apache.org/jira/browse/YARN-7017
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: sarun
>
> *PROBLEM*
> How to enable preemption on a single queue in a cluster?
> *DESCRIPTION*
> As of today the only way to enable preemption at a queue level is:
> * Enable cluster level preemption
> * Disable preemption on the queues where not required using 
> yarn.scheduler.capacity..disable_preemption to true
> Can we have some sort of a parameter like 
> *_yarn.scheduler.capacity..enable_preemption_*
> which would just enable preemption per queue instead of going the other way 
> round which is more time consuming and error prone.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7017) Enable preemption for a single queue.

2017-08-15 Thread sarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sarun updated YARN-7017:

Priority: Major  (was: Critical)

> Enable preemption for a single queue.
> -
>
> Key: YARN-7017
> URL: https://issues.apache.org/jira/browse/YARN-7017
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: sarun
>
> PROBLEM
> How to enable preemption on a single queue in a cluster?
> DESCRIPTION
> As of today the only way to enable preemption at a queue level is:
> * Enable cluster level preemption
> * Disable preemption on the queues where not required using 
> yarn.scheduler.capacity..disable_preemption to true
> Can we have some sort of a parameter like 
> *_yarn.scheduler.capacity..enable_preemption_*
> which would just enable preemption per queue instead of going the other way 
> round which is more time consuming and error prone.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7017) Enable preemption for a single queue.

2017-08-15 Thread sarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sarun updated YARN-7017:

Priority: Critical  (was: Major)

> Enable preemption for a single queue.
> -
>
> Key: YARN-7017
> URL: https://issues.apache.org/jira/browse/YARN-7017
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: sarun
>Priority: Critical
>
> PROBLEM
> How to enable preemption on a single queue in a cluster?
> DESCRIPTION
> As of today the only way to enable preemption at a queue level is:
> * Enable cluster level preemption
> * Disable preemption on the queues where not required using 
> yarn.scheduler.capacity..disable_preemption to true
> Can we have some sort of a parameter like 
> *_yarn.scheduler.capacity..enable_preemption_*
> which would just enable preemption per queue instead of going the other way 
> round which is more time consuming and error prone.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7017) Enable preemption for a single queue.

2017-08-15 Thread sarun (JIRA)
sarun created YARN-7017:
---

 Summary: Enable preemption for a single queue.
 Key: YARN-7017
 URL: https://issues.apache.org/jira/browse/YARN-7017
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: yarn
Reporter: sarun


PROBLEM
How to enable preemption on a single queue in a cluster?
DESCRIPTION
As of today the only way to enable preemption at a queue level is:
* Enable cluster level preemption
* Disable preemption on the queues where not required using 
yarn.scheduler.capacity..disable_preemption to true

Can we have some sort of a parameter like 
*_yarn.scheduler.capacity..enable_preemption_*
which would just enable preemption per queue instead of going the other way 
round which is more time consuming and error prone.





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6589) Recover all resources when NM restart

2017-08-15 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6589:
-
Priority: Blocker  (was: Major)

> Recover all resources when NM restart
> -
>
> Key: YARN-6589
> URL: https://issues.apache.org/jira/browse/YARN-6589
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Blocker
> Attachments: YARN-6589-YARN-3926.001.patch
>
>
> When the NM restarts, containers will be recovered. However, only the memory and 
> vcores in the capability are recovered. All resources need to be recovered.
> {code:title=ContainerImpl.java}
>   // resource capability had been updated before NM was down
>   this.resource = 
> Resource.newInstance(recoveredCapability.getMemorySize(),
>   recoveredCapability.getVirtualCores());
> {code}
> It should be like this.
> {code:title=ContainerImpl.java}
>   // resource capability had been updated before NM was down
>   // need to recover all resources, not only 
>   this.resource = Resources.clone(recoveredCapability);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6589) Recover all resources when NM restart

2017-08-15 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127712#comment-16127712
 ] 

Wangda Tan commented on YARN-6589:
--

Thanks [~fly_in_gis] for reporting and working on this JIRA. 

Converted to a YARN-3926 subtask; I think this is a blocker for the YARN-3926 merge. 

Yang, could you update the patch to address the Jenkins-reported issues? The patch 
looks good.

> Recover all resources when NM restart
> -
>
> Key: YARN-6589
> URL: https://issues.apache.org/jira/browse/YARN-6589
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yang Wang
>Assignee: Yang Wang
> Attachments: YARN-6589-YARN-3926.001.patch
>
>
> When the NM restarts, containers will be recovered. However, only the memory and 
> vcores in the capability are recovered. All resources need to be recovered.
> {code:title=ContainerImpl.java}
>   // resource capability had been updated before NM was down
>   this.resource = 
> Resource.newInstance(recoveredCapability.getMemorySize(),
>   recoveredCapability.getVirtualCores());
> {code}
> It should be like this.
> {code:title=ContainerImpl.java}
>   // resource capability had been updated before NM was down
>   // need to recover all resources, not only 
>   this.resource = Resources.clone(recoveredCapability);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7016) Consider using ZKCuratorManager in CuratorService

2017-08-15 Thread JIRA
Íñigo Goiri created YARN-7016:
-

 Summary: Consider using ZKCuratorManager in CuratorService
 Key: YARN-7016
 URL: https://issues.apache.org/jira/browse/YARN-7016
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Íñigo Goiri


{{CuratorService}} uses the Curator framework directly, which has since been wrapped 
in {{ZKCuratorManager}}. It would be good to make it use the common framework.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6589) Recover all resources when NM restart

2017-08-15 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6589:
-
Issue Type: Sub-task  (was: Bug)
Parent: YARN-3926

> Recover all resources when NM restart
> -
>
> Key: YARN-6589
> URL: https://issues.apache.org/jira/browse/YARN-6589
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yang Wang
>Assignee: Yang Wang
> Attachments: YARN-6589-YARN-3926.001.patch
>
>
> When the NM restarts, containers will be recovered. However, only the memory and 
> vcores in the capability are recovered. All resources need to be recovered.
> {code:title=ContainerImpl.java}
>   // resource capability had been updated before NM was down
>   this.resource = 
> Resource.newInstance(recoveredCapability.getMemorySize(),
>   recoveredCapability.getVirtualCores());
> {code}
> It should be like this.
> {code:title=ContainerImpl.java}
>   // resource capability had been updated before NM was down
>   // need to recover all resources, not only 
>   this.resource = Resources.clone(recoveredCapability);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-65) Reduce RM app memory footprint once app has completed

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-65?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127700#comment-16127700
 ] 

Hadoop QA commented on YARN-65:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 388 unchanged - 2 fixed = 389 total (was 390) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 41s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-65 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881977/YARN-65.009.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ebb291734826 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dadb0c2 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16911/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16911/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16911/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16911/console |
| Powered by | Apache Yetus 

[jira] [Commented] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-15 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127696#comment-16127696
 ] 

Jian He commented on YARN-6959:
---

[~yqwang], TestFairScheduler is failing with the patch, can you take a look?

> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959-branch-2.7.002.patch, 
> YARN-6959-branch-2.8.001.patch, YARN-6959.yarn_nm.log.zip, 
> YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> A previous attempt's ResourceRequests may be recorded into the current attempt's 
> ResourceRequests. These mis-recorded ResourceRequests may confuse the AM 
> container request and allocation for the current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // Previous precondition check for the attempt id may be outdated here, 
> // i.e. the currentAttempt may not be the corresponding attempt of the 
> attemptId.
> // Such as the attempt id is corresponding to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // Previous attempt ResourceRequest may be recorded into current attempt 
> ResourceRequests
> currentAttempt.updateResourceRequests(ask) ->
> // RM may allocate wrong AM Container for the current attempt, because its 
> ResourceRequests
> // may come from previous attempt which can be any ResourceRequests previous 
> AM asked
> // and there is not matching logic for the original AM Container 
> ResourceRequest and 
> // the returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> Because after this patch, RM will definitely record ResourceRequests from 
> different attempts into different objects of 
> SchedulerApplicationAttempt.AppSchedulingInfo.
> So, even if RM still records ResourceRequests from an old attempt at any time, 
> these ResourceRequests will be recorded in the old AppSchedulingInfo object, which 
> will not impact the current attempt's resource requests and allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; 
> we should rename it to getCurrentApplicationAttempt and reconsider 
> whether there are any other bugs related to getApplicationAttempt.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-15 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127673#comment-16127673
 ] 

Vrushali C commented on YARN-6820:
--

Okay, so the branch-2 branch name is "YARN-5355_branch2". 

Here are the latest commits:
https://github.com/apache/hadoop/commits/YARN-5355

https://github.com/apache/hadoop/commits/YARN-5355_branch2

The commits are not in the same order but pretty much the same across both.


> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Fix For: YARN-5355
>
> Attachments: YARN-6820-YARN-5355.0001.patch, 
> YARN-6820-YARN-5355.002.patch, YARN-6820-YARN-5355.003.patch, 
> YARN-6820-YARN-5355.004.patch, YARN-6820-YARN-5355.005.patch, 
> YARN-6820-YARN-5355_branch_2.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSv1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6996) Change javax.cache library implementation from JSR107 to Apache Geronimo

2017-08-15 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127671#comment-16127671
 ] 

Ray Chiang commented on YARN-6996:
--

Thanks [~subru] and [~busbey]!

> Change javax.cache library implementation from JSR107 to Apache Geronimo
> 
>
> Key: YARN-6996
> URL: https://issues.apache.org/jira/browse/YARN-6996
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Blocker
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6996.001.patch
>
>
> With YARN Federation, we added YARN-3672, which adds the following to 
> {noformat}
> javax.cache
> cache-api
> {noformat}
> This third-party library has some murky license history, as documented in 
> this [really long comment 
> thread|https://github.com/jsr107/jsr107spec/issues/333].  The summary of the 
> thread is that "the library is officially APL (take our word for it), but 
> there hasn't been a subsequent release with the license file change".
> LEGAL-325 has been filed to discuss the validity of this license for Apache.
> Before we get to final Hadoop 3 release, I'm wondering if anyone else has 
> concerns about using this library.  Just from looking at the various javax 
> Maven artifacts in our pom.xml files, I see a lot of other javax.* library 
> entries (although we may not ship the .jars if they're part of the Java 
> runtime).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6739) Crash NM at start time if oversubscription is on but LinuxContainerExcutor or cgroup is off

2017-08-15 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127664#comment-16127664
 ] 

Haibo Chen commented on YARN-6739:
--

As a follow-up, turn on the CPU and memory cgroups and strict resource usage mode if 
oversubscription is enabled.
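
For reference, a sketch of the NodeManager settings such a startup check would presumably look at; these are standard yarn-site.xml properties, though the exact validation is up to the eventual patch.

{noformat}
yarn.nodemanager.container-executor.class = org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor
yarn.nodemanager.linux-container-executor.resources-handler.class = org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler
yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage = true
{noformat}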

> Crash NM at start time if oversubscription is on but LinuxContainerExcutor or 
> cgroup is off
> ---
>
> Key: YARN-6739
> URL: https://issues.apache.org/jira/browse/YARN-6739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6964) Fair scheduler misuses Resources operations

2017-08-15 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-6964:
---
Attachment: YARN-6964.006.patch

Per offline discussion, 1, 2.1, 3, and 4 are fine.  Attaching a patch that 
addresses 2.2 better.

> Fair scheduler misuses Resources operations
> ---
>
> Key: YARN-6964
> URL: https://issues.apache.org/jira/browse/YARN-6964
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-6964.001.patch, YARN-6964.002.patch, 
> YARN-6964.003.patch, YARN-6964.004.patch, YARN-6964.005.patch, 
> YARN-6964.006.patch
>
>
> There are several places where YARN uses the {{Resources}} class to do 
> comparisons of {{Resource}} instances incorrectly.  This patch corrects those 
> mistakes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


