[jira] [Assigned] (YARN-8319) More YARN pages need to honor yarn.resourcemanager.display.per-user-apps

2018-05-18 Thread Sunil Govindan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan reassigned YARN-8319:


Assignee: Sunil Govindan

> More YARN pages need to honor yarn.resourcemanager.display.per-user-apps
> 
>
> Key: YARN-8319
> URL: https://issues.apache.org/jira/browse/YARN-8319
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil Govindan
>Priority: Major
>
> When this config is on
>  - Per queue page on UI2 should filter app list by user
>  -- TODO: Verify the same with UI1 Per-queue page
>  - ATSv2 with UI2 should filter list of all users' flows and flow activities
>  - Per Node pages
>  -- Listing of apps and containers on a per-node basis should filter apps and 
> containers by user.
> To this end, because this is no longer just for resourcemanager, we should 
> also deprecate {{yarn.resourcemanager.display.per-user-apps}} in favor of 
> {{yarn.webapp.filter-app-list-by-user}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7494) Add muti node lookup support for better placement

2018-05-18 Thread Sunil Govindan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-7494:
-
Attachment: YARN-7494.008.patch

> Add muti node lookup support for better placement
> -
>
> Key: YARN-7494
> URL: https://issues.apache.org/jira/browse/YARN-7494
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-7494.001.patch, YARN-7494.002.patch, 
> YARN-7494.003.patch, YARN-7494.004.patch, YARN-7494.005.patch, 
> YARN-7494.006.patch, YARN-7494.007.patch, YARN-7494.008.patch, 
> YARN-7494.v0.patch, YARN-7494.v1.patch, multi-node-designProposal.png
>
>
> Instead of a single node, for better placement effectiveness we can consider a 
> multi-node lookup, based on partition to start with.






[jira] [Commented] (YARN-7494) Add muti node lookup support for better placement

2018-05-18 Thread Sunil Govindan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480403#comment-16480403
 ] 

Sunil Govindan commented on YARN-7494:
--

Thanks [~cheersyang] and [~leftnoteasy]

Attaching the latest patch addressing the comments. Please find clarifications 
for a few of the comments below.

bq.In another word, please make sure {{getNodesPerPartition}} returns all nodes 
if there is no partition configured.

Yes, I tested this locally. I also added a test case to ensure that nodes are 
returned both in a normal cluster setup and when labels are configured.
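As a rough sketch of that fallback behavior (the class and method names here are illustrative, not the actual CapacityScheduler code): an empty or default partition falls back to the full node set.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch only; names do not match the actual CapacityScheduler API.
class NodesByPartition {
    private final Map<String, Set<String>> byPartition = new HashMap<>();
    private final Set<String> allNodes = new LinkedHashSet<>();

    void addNode(String node, String partition) {
        allNodes.add(node);
        if (partition != null && !partition.isEmpty()) {
            byPartition.computeIfAbsent(partition, k -> new LinkedHashSet<>()).add(node);
        }
    }

    // Empty/default partition means no labels are configured: return every node.
    Set<String> getNodesPerPartition(String partition) {
        if (partition == null || partition.isEmpty()) {
            return Collections.unmodifiableSet(allNodes);
        }
        return byPartition.getOrDefault(partition, Collections.emptySet());
    }
}
```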

bq.you can't call getCSLeafQueue because it assumes it is CS scheduler

I added it to the Queue base class. It resolves correctly now and won't cause 
any test issues. It is better to handle it this way; otherwise we would have to 
write several APIs to achieve the same from CS to the app.

bq.line 57: {{#getPreferredNodeIterator}} this API should not depend on 
{{Collection nodes}}, can you please double check.

The default policy will accept nodes from CS and sort them on demand, whereas 
ResourceUsageBased will fetch them from the sorter thread. Wangda also 
suggested keeping both, hence I added this API to handle both scenarios.

bq.what's the difference of this class to DefaultMultiNodeLookupPolicy

As I mentioned above, DefaultMultiNodeLookupPolicy won't take nodes from the 
sorter thread; rather, it sorts on demand. This is a choice for the user and is 
enabled as needed.
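The two policy flavors discussed here can be sketched roughly as follows; the interface and class names are illustrative and do not match the actual YARN-7494 patch.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;

// Illustrative sketch only; names do not match the actual YARN-7494 patch.
interface NodeLookupPolicy<N> {
    // Returns candidate nodes in the order the scheduler should try them.
    Iterator<N> getPreferredNodeIterator(Collection<N> nodes);
}

// Default flavor: accepts the node set from CS and sorts it on demand.
class SortOnDemandPolicy<N> implements NodeLookupPolicy<N> {
    private final Comparator<N> comparator;
    SortOnDemandPolicy(Comparator<N> comparator) { this.comparator = comparator; }
    @Override
    public Iterator<N> getPreferredNodeIterator(Collection<N> nodes) {
        List<N> sorted = new ArrayList<>(nodes);
        sorted.sort(comparator);      // sorting happens at lookup time
        return sorted.iterator();
    }
}

// Resource-usage flavor: serves a snapshot kept fresh by a background sorter thread.
class PresortedSnapshotPolicy<N> implements NodeLookupPolicy<N> {
    private volatile List<N> snapshot = new ArrayList<>();
    void updateSnapshot(List<N> sortedNodes) { this.snapshot = sortedNodes; }
    @Override
    public Iterator<N> getPreferredNodeIterator(Collection<N> ignored) {
        return snapshot.iterator();   // ignores the argument; uses the presorted list
    }
}
```

Keeping the `Collection nodes` parameter on the interface lets both flavors share one call site in the scheduler, even though the presorted flavor ignores it.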

> Add muti node lookup support for better placement
> -
>
> Key: YARN-7494
> URL: https://issues.apache.org/jira/browse/YARN-7494
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-7494.001.patch, YARN-7494.002.patch, 
> YARN-7494.003.patch, YARN-7494.004.patch, YARN-7494.005.patch, 
> YARN-7494.006.patch, YARN-7494.007.patch, YARN-7494.008.patch, 
> YARN-7494.v0.patch, YARN-7494.v1.patch, multi-node-designProposal.png
>
>
> Instead of single node, for effectiveness we can consider a multi node lookup 
> based on partition to start with.






[jira] [Commented] (YARN-8297) Incorrect ATS Url used for Wire encrypted cluster

2018-05-22 Thread Sunil Govindan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483819#comment-16483819
 ] 

Sunil Govindan commented on YARN-8297:
--

[~rohithsharma], could you please help review the patch?

> Incorrect ATS Url used for Wire encrypted cluster
> -
>
> Key: YARN-8297
> URL: https://issues.apache.org/jira/browse/YARN-8297
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Sunil Govindan
>Priority: Blocker
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8297-addendum.patch, YARN-8297.001.patch
>
>
> The "Service" page uses an incorrect web URL for ATS in a wire-encrypted 
> environment: for ATS URLs, it uses the https protocol with the http port.
> This causes all ATS calls to fail, and the UI does not display component 
> details.
> url used: 
> https://xxx:8198/ws/v2/timeline/apps/application_1526357251888_0022/entities/SERVICE_ATTEMPT?fields=ALL&_=1526415938320
> expected url : 
> https://xxx:8199/ws/v2/timeline/apps/application_1526357251888_0022/entities/SERVICE_ATTEMPT?fields=ALL&_=1526415938320
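The failure mode above amounts to pairing the scheme with the wrong port. A minimal sketch of the needed invariant, using the port numbers from the example; the class and its constructor are hypothetical, not the actual UI2 code:

```java
import java.util.Objects;

// Hypothetical sketch of the invariant behind YARN-8297: the scheme and port
// must be chosen together from the same "wire encryption enabled" flag,
// never https with the http port.
class TimelineUrlBuilder {
    private final String host;
    private final int httpPort;
    private final int httpsPort;
    private final boolean httpsEnabled;

    TimelineUrlBuilder(String host, int httpPort, int httpsPort, boolean httpsEnabled) {
        this.host = Objects.requireNonNull(host);
        this.httpPort = httpPort;
        this.httpsPort = httpsPort;
        this.httpsEnabled = httpsEnabled;
    }

    String baseUrl() {
        // One flag decides both halves, so they can never disagree.
        return httpsEnabled
            ? "https://" + host + ":" + httpsPort
            : "http://" + host + ":" + httpPort;
    }
}
```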






[jira] [Updated] (YARN-8297) Incorrect ATS Url used for Wire encrypted cluster

2018-05-22 Thread Sunil Govindan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8297:
-
Attachment: YARN-8297-addendum.patch

> Incorrect ATS Url used for Wire encrypted cluster
> -
>
> Key: YARN-8297
> URL: https://issues.apache.org/jira/browse/YARN-8297
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Sunil Govindan
>Priority: Blocker
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8297-addendum.patch, YARN-8297.001.patch
>
>
> The "Service" page uses an incorrect web URL for ATS in a wire-encrypted 
> environment: for ATS URLs, it uses the https protocol with the http port.
> This causes all ATS calls to fail, and the UI does not display component 
> details.
> url used: 
> https://xxx:8198/ws/v2/timeline/apps/application_1526357251888_0022/entities/SERVICE_ATTEMPT?fields=ALL&_=1526415938320
> expected url : 
> https://xxx:8199/ws/v2/timeline/apps/application_1526357251888_0022/entities/SERVICE_ATTEMPT?fields=ALL&_=1526415938320






[jira] [Reopened] (YARN-8297) Incorrect ATS Url used for Wire encrypted cluster

2018-05-22 Thread Sunil Govindan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan reopened YARN-8297:
--

This issue was not handled cleanly and needs an addendum patch.

> Incorrect ATS Url used for Wire encrypted cluster
> -
>
> Key: YARN-8297
> URL: https://issues.apache.org/jira/browse/YARN-8297
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Sunil Govindan
>Priority: Blocker
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8297-addendum.patch, YARN-8297.001.patch
>
>
> The "Service" page uses an incorrect web URL for ATS in a wire-encrypted 
> environment: for ATS URLs, it uses the https protocol with the http port.
> This causes all ATS calls to fail, and the UI does not display component 
> details.
> url used: 
> https://xxx:8198/ws/v2/timeline/apps/application_1526357251888_0022/entities/SERVICE_ATTEMPT?fields=ALL&_=1526415938320
> expected url : 
> https://xxx:8199/ws/v2/timeline/apps/application_1526357251888_0022/entities/SERVICE_ATTEMPT?fields=ALL&_=1526415938320






[jira] [Commented] (YARN-4781) Support intra-queue preemption for fairness ordering policy.

2018-05-22 Thread Sunil Govindan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484049#comment-16484049
 ] 

Sunil Govindan commented on YARN-4781:
--

Hi [~eepayne]

The latest patch looks good to me. I tested this in a local cluster and it 
looks fine.

However, I have not verified the case where the FairOrdering policy is used 
with weights. Did you get a chance to cross-check that as well? Thanks.

Other than this, I am good with committing this patch.

> Support intra-queue preemption for fairness ordering policy.
> 
>
> Key: YARN-4781
> URL: https://issues.apache.org/jira/browse/YARN-4781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Wangda Tan
>Assignee: Eric Payne
>Priority: Major
> Attachments: YARN-4781.001.patch, YARN-4781.002.patch, 
> YARN-4781.003.patch, YARN-4781.004.patch, YARN-4781.005.patch
>
>
> We introduced the fairness queue policy in YARN-3319, which lets large 
> applications make progress without starving small applications. However, if a 
> large application takes the queue's resources and its containers have a long 
> lifespan, small applications could still wait for resources for a long time, 
> and SLAs cannot be guaranteed.
> Instead of waiting for applications to release resources on their own, we 
> need to preempt resources in queues with the fairness policy enabled.






[jira] [Commented] (YARN-8351) RM is flooded with node attributes manager logs

2018-05-23 Thread Sunil Govindan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488368#comment-16488368
 ] 

Sunil Govindan commented on YARN-8351:
--

The patch looks straightforward. Committing shortly, pending Jenkins.

> RM is flooded with node attributes manager logs
> ---
>
> Key: YARN-8351
> URL: https://issues.apache.org/jira/browse/YARN-8351
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8351-YARN-3409.001.patch, YARN-8351.001.patch
>
>
> When distributed node attributes are enabled, the RM updates these attributes 
> on each NM heartbeat interval, and each time it writes a log line like
> {noformat}
> REPLACE attributes on nodes: NM="xxx", attributes=""
> {noformat}
> This should be at DEBUG level.
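The change amounts to guarding the per-heartbeat message behind a debug-level check, roughly like the sketch below (java.util.logging is used here only for illustration; Hadoop itself logs through SLF4J/commons-logging, and the class name is hypothetical):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch of the fix direction only: per-heartbeat attribute updates are
// logged at debug (FINE) level instead of INFO, behind a level guard.
class NodeAttributeLogger {
    private static final Logger LOG = Logger.getLogger(NodeAttributeLogger.class.getName());

    static void logReplace(String nm, String attributes) {
        // The guard also keeps string concatenation off the hot heartbeat path
        // when debug logging is disabled.
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("REPLACE attributes on nodes: NM=\"" + nm
                + "\", attributes=\"" + attributes + "\"");
        }
    }
}
```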






[jira] [Updated] (YARN-8346) Upgrading to 3.1 kills running containers with error "Opportunistic container queue is full"

2018-05-23 Thread Sunil Govindan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8346:
-
Issue Type: Sub-task  (was: Bug)
Parent: YARN-8347

> Upgrading to 3.1 kills running containers with error "Opportunistic container 
> queue is full"
> 
>
> Key: YARN-8346
> URL: https://issues.apache.org/jira/browse/YARN-8346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Priority: Major
>
> While rolling-upgrading from the 2.8.4 to the 3.1 release, it is observed 
> that all the running containers are killed and a second attempt is launched 
> for the application. The diagnostic message, "Opportunistic container queue 
> is full", is given as the reason the containers were killed.
> In the NM log, I see the lines below after a container is recovered.
> {noformat}
> 2018-05-23 17:18:50,655 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler.ContainerScheduler:
>  Opportunistic container [container_e06_1527075664705_0001_01_01] will 
> not be queued at the NM since max queue length [0] has been reached
> {noformat}
> The following steps are executed for the rolling upgrade:
> # Install a 2.8.4 cluster and launch an MR job with distributed cache enabled.
> # Stop the 2.8.4 RM. Start the 3.1.0 RM with the same configuration.
> # Stop the 2.8.4 NMs batch by batch. Start the 3.1.0 NMs batch by batch.
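The kill path described above can be sketched as a queue-length admission check being applied to recovered containers (a simplified model of the failure mode, not the actual NM ContainerScheduler):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Simplified model of the failure mode in YARN-8346, not the actual NM code:
// if recovered containers from a pre-upgrade NM are run through the
// opportunistic-queue admission check with max queue length 0, they are
// rejected and killed instead of being resumed.
class OpportunisticQueue {
    private final int maxQueueLength;
    private final Deque<String> queued = new ArrayDeque<>();

    OpportunisticQueue(int maxQueueLength) { this.maxQueueLength = maxQueueLength; }

    boolean tryQueue(String containerId) {
        if (queued.size() >= maxQueueLength) {
            // Mirrors the "will not be queued at the NM since max queue
            // length [0] has been reached" rejection in the log above.
            return false;
        }
        queued.add(containerId);
        return true;
    }
}
```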






[jira] [Created] (YARN-8347) [Umbrella] Upgrade efforts to Hadoop 3.x

2018-05-23 Thread Sunil Govindan (JIRA)
Sunil Govindan created YARN-8347:


 Summary: [Umbrella] Upgrade efforts to Hadoop 3.x
 Key: YARN-8347
 URL: https://issues.apache.org/jira/browse/YARN-8347
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Sunil Govindan


This is an umbrella ticket to track all efforts to close the gaps in upgrading 
to Hadoop 3.x.






[jira] [Commented] (YARN-8347) [Umbrella] Upgrade efforts to Hadoop 3.x

2018-05-23 Thread Sunil Govindan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487202#comment-16487202
 ] 

Sunil Govindan commented on YARN-8347:
--

cc [~leftnoteasy]

> [Umbrella] Upgrade efforts to Hadoop 3.x
> 
>
> Key: YARN-8347
> URL: https://issues.apache.org/jira/browse/YARN-8347
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sunil Govindan
>Priority: Major
>
> This is an umbrella ticket to track all efforts to close the gaps in 
> upgrading to Hadoop 3.x.






[jira] [Commented] (YARN-8319) More YARN pages need to honor yarn.resourcemanager.display.per-user-apps

2018-05-23 Thread Sunil Govindan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487084#comment-16487084
 ] 

Sunil Govindan commented on YARN-8319:
--

Updated the patch with a test case. [~rohithsharma], please help review.

> More YARN pages need to honor yarn.resourcemanager.display.per-user-apps
> 
>
> Key: YARN-8319
> URL: https://issues.apache.org/jira/browse/YARN-8319
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8319.001.patch, YARN-8319.002.patch, 
> YARN-8319.003.patch
>
>
> When this config is on
>  - Per queue page on UI2 should filter app list by user
>  -- TODO: Verify the same with UI1 Per-queue page
>  - ATSv2 with UI2 should filter list of all users' flows and flow activities
>  - Per Node pages
>  -- Listing of apps and containers on a per-node basis should filter apps and 
> containers by user.
> To this end, because this is no longer just for resourcemanager, we should 
> also deprecate {{yarn.resourcemanager.display.per-user-apps}} in favor of 
> {{yarn.webapp.filter-app-list-by-user}}






[jira] [Updated] (YARN-8319) More YARN pages need to honor yarn.resourcemanager.display.per-user-apps

2018-05-23 Thread Sunil Govindan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8319:
-
Attachment: YARN-8319.003.patch

> More YARN pages need to honor yarn.resourcemanager.display.per-user-apps
> 
>
> Key: YARN-8319
> URL: https://issues.apache.org/jira/browse/YARN-8319
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8319.001.patch, YARN-8319.002.patch, 
> YARN-8319.003.patch
>
>
> When this config is on
>  - Per queue page on UI2 should filter app list by user
>  -- TODO: Verify the same with UI1 Per-queue page
>  - ATSv2 with UI2 should filter list of all users' flows and flow activities
>  - Per Node pages
>  -- Listing of apps and containers on a per-node basis should filter apps and 
> containers by user.
> To this end, because this is no longer just for resourcemanager, we should 
> also deprecate {{yarn.resourcemanager.display.per-user-apps}} in favor of 
> {{yarn.webapp.filter-app-list-by-user}}






[jira] [Updated] (YARN-8346) Upgrading to 3.1 kills running containers with error "Opportunistic container queue is full"

2018-05-23 Thread Sunil Govindan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8346:
-
Target Version/s: 3.1.1, 3.0.3

> Upgrading to 3.1 kills running containers with error "Opportunistic container 
> queue is full"
> 
>
> Key: YARN-8346
> URL: https://issues.apache.org/jira/browse/YARN-8346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0, 3.0.2
>Reporter: Rohith Sharma K S
>Priority: Blocker
>
> While rolling-upgrading from the 2.8.4 to the 3.1 release, it is observed 
> that all the running containers are killed and a second attempt is launched 
> for the application. The diagnostic message, "Opportunistic container queue 
> is full", is given as the reason the containers were killed.
> In the NM log, I see the lines below after a container is recovered.
> {noformat}
> 2018-05-23 17:18:50,655 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler.ContainerScheduler:
>  Opportunistic container [container_e06_1527075664705_0001_01_01] will 
> not be queued at the NM since max queue length [0] has been reached
> {noformat}
> The following steps are executed for the rolling upgrade:
> # Install a 2.8.4 cluster and launch an MR job with distributed cache enabled.
> # Stop the 2.8.4 RM. Start the 3.1.0 RM with the same configuration.
> # Stop the 2.8.4 NMs batch by batch. Start the 3.1.0 NMs batch by batch.






[jira] [Commented] (YARN-8353) LightWeightResource's hashCode function is different from parent class

2018-05-24 Thread Sunil Govindan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488649#comment-16488649
 ] 

Sunil Govindan commented on YARN-8353:
--

Hi [~LongGang]

We have two types of resources. For internal handling in the scheduler etc., 
we use a {{LightWeightResource}} instead of a PBImpl object to avoid the 
bulkiness and to improve performance. PBImpl is needed only when a resource is 
communicated to the outside world. Hence they are different.

Now we can look into {{ContainerUpdateContext}} and ensure both APIs use the 
same type of objects.
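When two implementations of the same logical value are mixed as keys in hash-based collections, equal objects must produce equal hash codes, which is exactly what breaks here. A minimal sketch of the contract (the classes below are illustrative stand-ins, not the real YARN Resource classes):

```java
import java.util.Objects;

// Illustrative stand-ins for the real YARN Resource classes: defining
// equals() and hashCode() once in the base type keeps a lightweight
// scheduler-internal flavor and a protobuf-backed flavor interchangeable
// as hash-map keys.
abstract class Res {
    abstract long getMemory();
    abstract int getVcores();

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Res)) return false;
        Res other = (Res) o;
        return getMemory() == other.getMemory() && getVcores() == other.getVcores();
    }

    // A single shared hashCode definition preserves the equals/hashCode contract
    // across every subclass.
    @Override
    public int hashCode() {
        return Objects.hash(getMemory(), getVcores());
    }
}

class LightRes extends Res {  // lightweight, scheduler-internal flavor
    private final long mem; private final int vcores;
    LightRes(long mem, int vcores) { this.mem = mem; this.vcores = vcores; }
    long getMemory() { return mem; }
    int getVcores() { return vcores; }
}

class PbRes extends Res {     // "PBImpl"-like flavor for the outside world
    private final long mem; private final int vcores;
    PbRes(long mem, int vcores) { this.mem = mem; this.vcores = vcores; }
    long getMemory() { return mem; }
    int getVcores() { return vcores; }
}
```

With this arrangement, a map keyed by one flavor can be queried with the other, which is the property `ContainerUpdateContext.outstandingIncreases` relies on.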

> LightWeightResource's hashCode function is different from parent class
> --
>
> Key: YARN-8353
> URL: https://issues.apache.org/jira/browse/YARN-8353
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: RM
>Affects Versions: 3.0.x
>Reporter: LongGang Chen
>Priority: Major
>
> LightWeightResource's hashCode function is different from its parent class's.
> One of the consequences is that 
> ContainerUpdateContext.removeFromOutstandingUpdate will not work correctly, 
> and ContainerUpdateContext.outstandingIncreases will accumulate stale data.
> A simple test:
> {code:java}
> public void testHashCode() throws Exception {
>     Resource resource = Resources.createResource(10, 10);
>     Resource resource1 = new ResourcePBImpl();
>     resource1.setMemorySize(10L);
>     resource1.setVirtualCores(10);
>     int x = resource.hashCode();
>     int y = resource1.hashCode();
>     Assert.assertEquals(x, y);
> }
> {code}
>  






[jira] [Updated] (YARN-8351) Node attribute manager logs are flooding RM logs

2018-05-25 Thread Sunil Govindan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8351:
-
Summary: Node attribute manager logs are flooding RM logs  (was: RM is 
flooded with node attributes manager logs)

> Node attribute manager logs are flooding RM logs
> 
>
> Key: YARN-8351
> URL: https://issues.apache.org/jira/browse/YARN-8351
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8351-YARN-3409.001.patch, YARN-8351.001.patch
>
>
> When distributed node attributes are enabled, the RM updates these attributes 
> on each NM heartbeat interval, and each time it writes a log line like
> {noformat}
> REPLACE attributes on nodes: NM="xxx", attributes=""
> {noformat}
> This should be at DEBUG level.






[jira] [Commented] (YARN-8197) Tracking URL in the app state does not get redirected to MR ApplicationMaster for Running applications

2018-05-25 Thread Sunil Govindan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490807#comment-16490807
 ] 

Sunil Govindan commented on YARN-8197:
--

Thanks [~vinodkv]. Updated the patch as per the above understanding.

> Tracking URL in the app state does not get redirected to MR ApplicationMaster 
> for Running applications
> --
>
> Key: YARN-8197
> URL: https://issues.apache.org/jira/browse/YARN-8197
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Critical
> Attachments: YARN-8197.001.patch
>
>
> {code}
> org.eclipse.jetty.servlet.ServletHandler:
> javax.servlet.ServletException: Could not determine the proxy server for 
> redirection
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.findRedirectUrl(AmIpFilter.java:211)
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:145)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1617)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:534)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Updated] (YARN-8197) Tracking URL in the app state does not get redirected to MR ApplicationMaster for Running applications

2018-05-25 Thread Sunil Govindan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8197:
-
Attachment: YARN-8197.001.patch

> Tracking URL in the app state does not get redirected to MR ApplicationMaster 
> for Running applications
> --
>
> Key: YARN-8197
> URL: https://issues.apache.org/jira/browse/YARN-8197
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Critical
> Attachments: YARN-8197.001.patch
>
>
> {code}
> org.eclipse.jetty.servlet.ServletHandler:
> javax.servlet.ServletException: Could not determine the proxy server for 
> redirection
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.findRedirectUrl(AmIpFilter.java:211)
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:145)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1617)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:534)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Commented] (YARN-4781) Support intra-queue preemption for fairness ordering policy.

2018-05-24 Thread Sunil Govindan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489086#comment-16489086
 ] 

Sunil Govindan commented on YARN-4781:
--

Yes, that makes sense to me. Thanks [~eepayne].

If there are no objections, I will commit this tomorrow.

> Support intra-queue preemption for fairness ordering policy.
> 
>
> Key: YARN-4781
> URL: https://issues.apache.org/jira/browse/YARN-4781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Wangda Tan
>Assignee: Eric Payne
>Priority: Major
> Attachments: YARN-4781.001.patch, YARN-4781.002.patch, 
> YARN-4781.003.patch, YARN-4781.004.patch, YARN-4781.005.patch
>
>
> We introduced the fairness queue policy in YARN-3319, which lets large 
> applications make progress without starving small applications. However, if a 
> large application takes the queue's resources and its containers have a long 
> lifespan, small applications could still wait for resources for a long time, 
> and SLAs cannot be guaranteed.
> Instead of waiting for applications to release resources on their own, we 
> need to preempt resources in queues with the fairness policy enabled.






[jira] [Commented] (YARN-8068) Application Priority field causes NPE in app timeline publish when Hadoop 2.7 based clients to 2.8+

2018-05-24 Thread Sunil Govindan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489548#comment-16489548
 ] 

Sunil Govindan commented on YARN-8068:
--

Thanks [~jlowe]. Yes, we missed it earlier. Thanks for helping with the backport.

> Application Priority field causes NPE in app timeline publish when Hadoop 2.7 
> based clients to 2.8+
> ---
>
> Key: YARN-8068
> URL: https://issues.apache.org/jira/browse/YARN-8068
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.8.3
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Blocker
> Fix For: 3.1.0, 2.10.0, 2.9.2, 3.0.3
>
> Attachments: YARN-8068.001.patch
>
>
> TimelineServiceV1Publisher.appCreated will cause an NPE, as we use it like 
> below:
> {code:java}
> entityInfo.put(ApplicationMetricsConstants.APPLICATION_PRIORITY_INFO, 
> app.getApplicationPriority().getPriority());{code}
> We have to handle this case during recovery.
>  
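The fix pattern is a null guard before publishing the priority. A minimal self-contained sketch of that guard, using plain-Java stand-ins rather than the actual Hadoop classes (the map key string here is illustrative, not the real constant):

```java
import java.util.HashMap;
import java.util.Map;

public class AppPriorityGuard {
    // Hypothetical stand-in for the timeline entity info map; the real code
    // uses ApplicationMetricsConstants and RMApp from org.apache.hadoop.yarn.
    static Map<String, Object> buildEntityInfo(Integer appPriority) {
        Map<String, Object> entityInfo = new HashMap<>();
        // Apps recovered from a 2.7-era client may carry no priority, so we
        // guard instead of dereferencing a possibly-null priority object.
        if (appPriority != null) {
            entityInfo.put("YARN_APPLICATION_PRIORITY", appPriority);
        }
        return entityInfo;
    }

    public static void main(String[] args) {
        System.out.println(buildEntityInfo(null).size());  // prints 0, no NPE
        System.out.println(buildEntityInfo(5).get("YARN_APPLICATION_PRIORITY"));  // prints 5
    }
}
```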






[jira] [Updated] (YARN-8319) More YARN pages need to honor yarn.resourcemanager.display.per-user-apps

2018-05-18 Thread Sunil Govindan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8319:
-
Attachment: YARN-8319.001.patch

> More YARN pages need to honor yarn.resourcemanager.display.per-user-apps
> 
>
> Key: YARN-8319
> URL: https://issues.apache.org/jira/browse/YARN-8319
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8319.001.patch
>
>
> When this config is on
>  - Per queue page on UI2 should filter app list by user
>  -- TODO: Verify the same with UI1 Per-queue page
>  - ATSv2 with UI2 should filter list of all users' flows and flow activities
>  - Per Node pages
>  -- Listing of apps and containers on a per-node basis should filter apps and 
> containers by user.
> To this end, because this is no longer just for resourcemanager, we should 
> also deprecate {{yarn.resourcemanager.display.per-user-apps}} in favor of 
> {{yarn.webapp.filter-app-list-by-user}}
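A sketch of what the operator-facing side of that rename could look like in yarn-site.xml, assuming the new key keeps the old key's boolean semantics (the description text is illustrative, not from the source):

```xml
<!-- yarn-site.xml: proposed replacement for
     yarn.resourcemanager.display.per-user-apps -->
<property>
  <name>yarn.webapp.filter-app-list-by-user</name>
  <value>true</value>
  <description>When true, YARN web pages (per-queue app lists, ATSv2 flows
  and flow activities, per-node app/container listings) filter the displayed
  entries by the requesting user.</description>
</property>
```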






[jira] [Updated] (YARN-8346) Upgrading to 3.1 kills running containers with error "Opportunistic container queue is full"

2018-05-23 Thread Sunil Govindan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8346:
-
Priority: Blocker  (was: Major)

> Upgrading to 3.1 kills running containers with error "Opportunistic container 
> queue is full"
> 
>
> Key: YARN-8346
> URL: https://issues.apache.org/jira/browse/YARN-8346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0, 3.0.2
>Reporter: Rohith Sharma K S
>Priority: Blocker
>
> It is observed that during a rolling upgrade from the 2.8.4 to the 3.1 
> release, all running containers are killed and a second attempt is launched 
> for the application. The diagnostic message given for the kill is 
> "Opportunistic container queue is full".
> In the NM log, I see the entry below after the container is recovered.
> {noformat}
> 2018-05-23 17:18:50,655 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler.ContainerScheduler:
>  Opportunistic container [container_e06_1527075664705_0001_01_01] will 
> not be queued at the NM since max queue length [0] has been reached
> {noformat}
> Following steps are executed for rolling upgrade
> # Install 2.8.4 cluster and launch a MR job with distributed cache enabled.
> # Stop 2.8.4 RM. Start 3.1.0 RM with same configuration.
> # Stop 2.8.4 NM batch by batch. Start 3.1.0 NM batch by batch. 
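The log above shows the recovered containers being rejected against the NM's opportunistic queue, whose length defaults to 0. Raising that limit is only a mitigation sketch, not the actual fix (the underlying bug is that recovered containers are being treated as opportunistic at all); the key below is the NM ContainerScheduler setting that governs the logged limit:

```xml
<!-- yarn-site.xml: mitigation sketch only; value is illustrative -->
<property>
  <name>yarn.nodemanager.opportunistic-containers-max-queue-length</name>
  <value>10</value>
</property>
```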






[jira] [Commented] (YARN-8346) Upgrading to 3.1 kills running containers with error "Opportunistic container queue is full"

2018-05-23 Thread Sunil Govindan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487249#comment-16487249
 ] 

Sunil Govindan commented on YARN-8346:
--

Bumping this up to Blocker.

> Upgrading to 3.1 kills running containers with error "Opportunistic container 
> queue is full"
> 
>
> Key: YARN-8346
> URL: https://issues.apache.org/jira/browse/YARN-8346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0, 3.0.2
>Reporter: Rohith Sharma K S
>Priority: Blocker
>
> It is observed that during a rolling upgrade from the 2.8.4 to the 3.1 
> release, all running containers are killed and a second attempt is launched 
> for the application. The diagnostic message given for the kill is 
> "Opportunistic container queue is full".
> In the NM log, I see the entry below after the container is recovered.
> {noformat}
> 2018-05-23 17:18:50,655 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler.ContainerScheduler:
>  Opportunistic container [container_e06_1527075664705_0001_01_01] will 
> not be queued at the NM since max queue length [0] has been reached
> {noformat}
> Following steps are executed for rolling upgrade
> # Install 2.8.4 cluster and launch a MR job with distributed cache enabled.
> # Stop 2.8.4 RM. Start 3.1.0 RM with same configuration.
> # Stop 2.8.4 NM batch by batch. Start 3.1.0 NM batch by batch. 






[jira] [Updated] (YARN-8346) Upgrading to 3.1 kills running containers with error "Opportunistic container queue is full"

2018-05-23 Thread Sunil Govindan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8346:
-
Affects Version/s: 3.1.0
   3.0.2

> Upgrading to 3.1 kills running containers with error "Opportunistic container 
> queue is full"
> 
>
> Key: YARN-8346
> URL: https://issues.apache.org/jira/browse/YARN-8346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0, 3.0.2
>Reporter: Rohith Sharma K S
>Priority: Blocker
>
> It is observed that during a rolling upgrade from the 2.8.4 to the 3.1 
> release, all running containers are killed and a second attempt is launched 
> for the application. The diagnostic message given for the kill is 
> "Opportunistic container queue is full".
> In the NM log, I see the entry below after the container is recovered.
> {noformat}
> 2018-05-23 17:18:50,655 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler.ContainerScheduler:
>  Opportunistic container [container_e06_1527075664705_0001_01_01] will 
> not be queued at the NM since max queue length [0] has been reached
> {noformat}
> Following steps are executed for rolling upgrade
> # Install 2.8.4 cluster and launch a MR job with distributed cache enabled.
> # Stop 2.8.4 RM. Start 3.1.0 RM with same configuration.
> # Stop 2.8.4 NM batch by batch. Start 3.1.0 NM batch by batch. 






[jira] [Commented] (YARN-8319) More YARN pages need to honor yarn.resourcemanager.display.per-user-apps

2018-05-23 Thread Sunil Govindan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487446#comment-16487446
 ] 

Sunil Govindan commented on YARN-8319:
--

Test failures are not related. [~rohithsharma], could you please check?

> More YARN pages need to honor yarn.resourcemanager.display.per-user-apps
> 
>
> Key: YARN-8319
> URL: https://issues.apache.org/jira/browse/YARN-8319
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8319.001.patch, YARN-8319.002.patch, 
> YARN-8319.003.patch
>
>
> When this config is on
>  - Per queue page on UI2 should filter app list by user
>  -- TODO: Verify the same with UI1 Per-queue page
>  - ATSv2 with UI2 should filter list of all users' flows and flow activities
>  - Per Node pages
>  -- Listing of apps and containers on a per-node basis should filter apps and 
> containers by user.
> To this end, because this is no longer just for resourcemanager, we should 
> also deprecate {{yarn.resourcemanager.display.per-user-apps}} in favor of 
> {{yarn.webapp.filter-app-list-by-user}}






[jira] [Updated] (YARN-8319) More YARN pages need to honor yarn.resourcemanager.display.per-user-apps

2018-05-20 Thread Sunil Govindan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8319:
-
Attachment: YARN-8319.002.patch

> More YARN pages need to honor yarn.resourcemanager.display.per-user-apps
> 
>
> Key: YARN-8319
> URL: https://issues.apache.org/jira/browse/YARN-8319
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8319.001.patch, YARN-8319.002.patch
>
>
> When this config is on
>  - Per queue page on UI2 should filter app list by user
>  -- TODO: Verify the same with UI1 Per-queue page
>  - ATSv2 with UI2 should filter list of all users' flows and flow activities
>  - Per Node pages
>  -- Listing of apps and containers on a per-node basis should filter apps and 
> containers by user.
> To this end, because this is no longer just for resourcemanager, we should 
> also deprecate {{yarn.resourcemanager.display.per-user-apps}} in favor of 
> {{yarn.webapp.filter-app-list-by-user}}






[jira] [Commented] (YARN-8319) More YARN pages need to honor yarn.resourcemanager.display.per-user-apps

2018-05-20 Thread Sunil Govindan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16482201#comment-16482201
 ] 

Sunil Govindan commented on YARN-8319:
--

Updating v2 patch after fixing jenkins.

[~vinodkv] [~rohithsharma] [~leftnoteasy] Kindly help to review.

> More YARN pages need to honor yarn.resourcemanager.display.per-user-apps
> 
>
> Key: YARN-8319
> URL: https://issues.apache.org/jira/browse/YARN-8319
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8319.001.patch, YARN-8319.002.patch
>
>
> When this config is on
>  - Per queue page on UI2 should filter app list by user
>  -- TODO: Verify the same with UI1 Per-queue page
>  - ATSv2 with UI2 should filter list of all users' flows and flow activities
>  - Per Node pages
>  -- Listing of apps and containers on a per-node basis should filter apps and 
> containers by user.
> To this end, because this is no longer just for resourcemanager, we should 
> also deprecate {{yarn.resourcemanager.display.per-user-apps}} in favor of 
> {{yarn.webapp.filter-app-list-by-user}}






[jira] [Commented] (YARN-8399) NodeManager is giving 403 GSS exception post upgrade to 3.1 in secure mode

2018-06-06 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16503292#comment-16503292
 ] 

Sunil Govindan commented on YARN-8399:
--

Thanks [~rohithsharma] and [~vinodkv] for the comments.

Updated this change in AuxServices. We need all three changes because we call 
*setConf* while doing ReflectionUtils.newInstance.

Please help to review.

> NodeManager is giving 403 GSS exception post upgrade to 3.1 in secure mode
> --
>
> Key: YARN-8399
> URL: https://issues.apache.org/jira/browse/YARN-8399
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineservice
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8399.001.patch, YARN-8399.002.patch
>
>
> Getting 403 GSS exception while accessing NM http port via curl. 
> {code:java}
> curl -k -i --negotiate -u: https://:/node
> HTTP/1.1 401 Authentication required
> Date: Tue, 05 Jun 2018 17:59:00 GMT
> Date: Tue, 05 Jun 2018 17:59:00 GMT
> Pragma: no-cache
> WWW-Authenticate: Negotiate
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 264
> HTTP/1.1 403 GSSException: Failure unspecified at GSS-API level (Mechanism 
> level: Request is a replay (34)){code}






[jira] [Updated] (YARN-8399) NodeManager is giving 403 GSS exception post upgrade to 3.1 in secure mode

2018-06-06 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8399:
-
Attachment: YARN-8399.002.patch

> NodeManager is giving 403 GSS exception post upgrade to 3.1 in secure mode
> --
>
> Key: YARN-8399
> URL: https://issues.apache.org/jira/browse/YARN-8399
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineservice
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8399.001.patch, YARN-8399.002.patch
>
>
> Getting 403 GSS exception while accessing NM http port via curl. 
> {code:java}
> curl -k -i --negotiate -u: https://:/node
> HTTP/1.1 401 Authentication required
> Date: Tue, 05 Jun 2018 17:59:00 GMT
> Date: Tue, 05 Jun 2018 17:59:00 GMT
> Pragma: no-cache
> WWW-Authenticate: Negotiate
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 264
> HTTP/1.1 403 GSSException: Failure unspecified at GSS-API level (Mechanism 
> level: Request is a replay (34)){code}






[jira] [Commented] (YARN-8419) In "New Service" section of new YARN UI, user cannot submit service as Submit button is always disabled.

2018-06-11 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509195#comment-16509195
 ] 

Sunil Govindan commented on YARN-8419:
--

Thanks [~suma.shivaprasad]. The patch looks straightforward. Pending Jenkins.

> In "New Service" section of new YARN UI, user cannot submit service as Submit 
> button is always disabled.
> 
>
> Key: YARN-8419
> URL: https://issues.apache.org/jira/browse/YARN-8419
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-8419.1.patch
>
>
> This is because the user.name check is still mandatory for a non-secure 
> cluster. But in a secure cluster, user.name is not exposed to the UI, which 
> is why this error appeared there.






[jira] [Assigned] (YARN-8419) In "New Service" section of new YARN UI, user cannot submit service as Submit button is always disabled.

2018-06-11 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan reassigned YARN-8419:


Assignee: Suma Shivaprasad

> In "New Service" section of new YARN UI, user cannot submit service as Submit 
> button is always disabled.
> 
>
> Key: YARN-8419
> URL: https://issues.apache.org/jira/browse/YARN-8419
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-8419.1.patch
>
>
> This is because the user.name check is still mandatory for a non-secure 
> cluster. But in a secure cluster, user.name is not exposed to the UI, which 
> is why this error appeared there.






[jira] [Commented] (YARN-8413) Flow activity page is failing with "Timeline server failed with an error"

2018-06-11 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509154#comment-16509154
 ] 

Sunil Govindan commented on YARN-8413:
--

Setting async to false means that the call being made has to complete before 
the next statement in the function can run. With async: true, the call begins 
its execution and the next statement runs regardless of whether the async call 
has completed yet.

The call we are looking at here needs to be synchronous, hence marking it as 
false. Attached v1 patch.

[~rohithsharma] please help to review.

> Flow activity page is failing with "Timeline server failed with an error"
> -
>
> Key: YARN-8413
> URL: https://issues.apache.org/jira/browse/YARN-8413
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.1
>Reporter: Yesha Vora
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8413.001.patch
>
>
> Flow activity page fails to load with "Timeline server failed with an error".
> This page uses the incorrect flow call 
> "https://localhost:8188/ws/v2/timeline/flows?_=1528755339836; and it is 
> failing to load.
> 1) It's using localhost instead of the ATS v2 hostname
> 2) It's using the ATS v1.5 HTTP port instead of the ATS v2 HTTPS port
> The correct REST call is "https://: port>/ws/v2/timeline/flows?_=1528755339836"






[jira] [Commented] (YARN-8404) RM Event dispatcher is blocked if ATS1/1.5 server is not running.

2018-06-11 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509149#comment-16509149
 ] 

Sunil Govindan commented on YARN-8404:
--

I synced up offline with [~rohithsharma]. Moving to async will definitely avoid 
any potential AsyncDispatcher block. This is the more important issue right 
now, so we can go ahead with this patch. Will open another JIRA to look at how 
to tackle the missing appFinished event scenario.

I will commit this by end of day if there are no objections.

> RM Event dispatcher is blocked if ATS1/1.5 server is not running. 
> --
>
> Key: YARN-8404
> URL: https://issues.apache.org/jira/browse/YARN-8404
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.0.2
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Blocker
> Attachments: YARN-8404.01.patch
>
>
> It is observed that if the ATS1/1.5 daemon is not running, RM recovery is 
> delayed while the timeline client times out for each application. By default, 
> the timeout takes around 5 minutes. If there are many completed applications, 
> the amount of time the RM will wait is *(number of completed applications in 
> the cluster * 5 minutes)*, which effectively hangs the RM. 
> The primary reason for this behavior is YARN-3044 / YARN-4129, which 
> refactored the existing system metrics publisher. That refactoring made the 
> appFinished event synchronous, whereas it was asynchronous earlier. 
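The fix direction discussed here — publishing asynchronously so the RM dispatcher never waits on a timeline timeout — can be modeled with a plain-Java sketch (stand-in names only; the real code lives in the RM's system metrics publisher and timeline client):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncPublishSketch {
    static final AtomicInteger published = new AtomicInteger();

    // Stand-in for a timeline publish that can block for minutes on timeout.
    static void slowPublish(int appId) {
        try {
            TimeUnit.MILLISECONDS.sleep(20);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        published.incrementAndGet();
    }

    // The dispatcher hands each appFinished event to a single-threaded
    // executor: submit() returns immediately, so the dispatcher never blocks
    // on a publish timeout, and a single worker preserves event ordering.
    static int publishAll(int apps) throws InterruptedException {
        published.set(0);
        ExecutorService publisher = Executors.newSingleThreadExecutor();
        for (int appId = 0; appId < apps; appId++) {
            final int id = appId;
            publisher.submit(() -> slowPublish(id));
        }
        publisher.shutdown();
        publisher.awaitTermination(30, TimeUnit.SECONDS);
        return published.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("published=" + publishAll(10));  // published=10
    }
}
```

This only models the dispatch pattern; it deliberately ignores the open question in the comment above, namely what happens to an appFinished event queued but not yet published at shutdown.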






[jira] [Updated] (YARN-8413) Flow activity page is failing with "Timeline server failed with an error"

2018-06-11 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8413:
-
Attachment: YARN-8413.001.patch

> Flow activity page is failing with "Timeline server failed with an error"
> -
>
> Key: YARN-8413
> URL: https://issues.apache.org/jira/browse/YARN-8413
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.1
>Reporter: Yesha Vora
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8413.001.patch
>
>
> Flow activity page fails to load with "Timeline server failed with an error".
> This page uses the incorrect flow call 
> "https://localhost:8188/ws/v2/timeline/flows?_=1528755339836; and it is 
> failing to load.
> 1) It's using localhost instead of the ATS v2 hostname
> 2) It's using the ATS v1.5 HTTP port instead of the ATS v2 HTTPS port
> The correct REST call is "https://: port>/ws/v2/timeline/flows?_=1528755339836"






[jira] [Assigned] (YARN-8413) Flow activity page is failing with "Timeline server failed with an error"

2018-06-11 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan reassigned YARN-8413:


Assignee: Sunil Govindan

> Flow activity page is failing with "Timeline server failed with an error"
> -
>
> Key: YARN-8413
> URL: https://issues.apache.org/jira/browse/YARN-8413
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.1
>Reporter: Yesha Vora
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8413.001.patch
>
>
> Flow activity page fails to load with "Timeline server failed with an error".
> This page uses the incorrect flow call 
> "https://localhost:8188/ws/v2/timeline/flows?_=1528755339836; and it is 
> failing to load.
> 1) It's using localhost instead of the ATS v2 hostname
> 2) It's using the ATS v1.5 HTTP port instead of the ATS v2 HTTPS port
> The correct REST call is "https://: port>/ws/v2/timeline/flows?_=1528755339836"






[jira] [Updated] (YARN-8419) [UI2] User cannot submit a new service as submit button is always disabled

2018-06-12 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8419:
-
Fix Version/s: 3.1.1
   3.2.0

> [UI2] User cannot submit a new service as submit button is always disabled
> --
>
> Key: YARN-8419
> URL: https://issues.apache.org/jira/browse/YARN-8419
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8419.1.patch
>
>
> This is because the user.name check is still mandatory for a non-secure 
> cluster. But in a secure cluster, user.name is not exposed to the UI, which 
> is why this error appeared there.






[jira] [Commented] (YARN-8415) TimelineWebServices.getEntity should throw a ForbiddenException(403) instead of 404 when ACL checks fail

2018-06-12 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509288#comment-16509288
 ] 

Sunil Govindan commented on YARN-8415:
--

Thanks [~suma.shivaprasad] 

Overall patch looks fine. Few minor nits

1. {{throw new YarnException (callerUGI, }} Instead of printing callerUGI, it's 
better to print callerUGI.getShortUserName(); otherwise it will be verbose.

2. Could we add a test for this?

> TimelineWebServices.getEntity should throw a ForbiddenException(403) instead 
> of 404 when ACL checks fail
> 
>
> Key: YARN-8415
> URL: https://issues.apache.org/jira/browse/YARN-8415
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-8415.1.patch, YARN-8415.2.patch
>
>
> {noformat}
> private TimelineEntity doGetEntity(
>   String entityType,
>   String entityId,
>   EnumSet fields,
>   UserGroupInformation callerUGI) throws YarnException, IOException {
> TimelineEntity entity = null;
> entity =
> store.getEntity(entityId, entityType, fields);
> if (entity != null) {
>   addDefaultDomainIdIfAbsent(entity);
>   // check ACLs
>   if (!timelineACLsManager.checkAccess(
>   callerUGI, ApplicationAccessType.VIEW_APP, entity)) {
>   entity = null;   //Should differentiate from an entity get failure 
> vs ACL check failure here by throwing an Exception.*
>   }
> }
> return entity;
>   }
> {noformat}
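The suggested change is to distinguish an ACL failure from a missing entity by throwing rather than folding both into a null return. A self-contained sketch with hypothetical stand-ins (the real web layer would throw org.apache.hadoop.yarn.webapp.ForbiddenException, which maps to HTTP 403):

```java
public class AclCheckSketch {
    // Hypothetical stand-in for the real ForbiddenException.
    static class ForbiddenException extends RuntimeException {
        ForbiddenException(String msg) { super(msg); }
    }

    // Models doGetEntity: null is reserved for "entity not found" (-> 404),
    // while an ACL failure now throws so it can surface as a 403.
    static String doGetEntity(String entityId, boolean aclAllows) {
        // Stand-in for the store lookup (store.getEntity in the real code).
        String entity = entityId == null ? null : "entity:" + entityId;
        if (entity == null) {
            return null;                      // genuinely missing -> 404
        }
        if (!aclAllows) {
            throw new ForbiddenException(
                "User is not allowed to view entity " + entityId);
        }
        return entity;
    }

    public static void main(String[] args) {
        System.out.println(doGetEntity("app_1", true));   // prints entity:app_1
        try {
            doGetEntity("app_1", false);
        } catch (ForbiddenException e) {
            System.out.println("403 -> " + e.getMessage());
        }
    }
}
```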






[jira] [Commented] (YARN-8419) In "New Service" section of new YARN UI, user cannot submit service as Submit button is always disabled.

2018-06-12 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509546#comment-16509546
 ] 

Sunil Govindan commented on YARN-8419:
--

Committing shortly.

> In "New Service" section of new YARN UI, user cannot submit service as Submit 
> button is always disabled.
> 
>
> Key: YARN-8419
> URL: https://issues.apache.org/jira/browse/YARN-8419
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-8419.1.patch
>
>
> This is because the user.name check is still mandatory for a non-secure 
> cluster. But in a secure cluster, user.name is not exposed to the UI, which 
> is why this error appeared there.






[jira] [Commented] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-12 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509791#comment-16509791
 ] 

Sunil Govindan commented on YARN-8258:
--

I am not sure YARN-8108 covers all cases. If 
{{yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled}} is false, 
RMAuthenticationFilterInitializer won't be invoked; rather, 
AuthenticationFilterInitializer will be invoked. 

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch, 
> YARN-8258.003.patch, YARN-8258.004.patch, YARN-8258.005.patch, 
> YARN-8258.006.patch, YARN-8258.007.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally all filters from default context has to be inherited to UI2 context 
> as well.






[jira] [Commented] (YARN-8386) App log can not be viewed from Logs tab in secure cluster

2018-06-07 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16504907#comment-16504907
 ] 

Sunil Govindan commented on YARN-8386:
--

[~rohithsharma] There are a few JIRAs that need to go into 3.0; otherwise it is 
a big change to cherry-pick.

I will identify those and cherry-pick them one by one. For now, we can close this.

>  App log can not be viewed from Logs tab in secure cluster
> --
>
> Key: YARN-8386
> URL: https://issues.apache.org/jira/browse/YARN-8386
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Sunil Govindan
>Priority: Critical
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8386.001.patch, YARN-8386.002.patch
>
>
> App logs cannot be viewed from the UI2 Logs tab.
> Steps:
> 1) Launch a YARN service 
> 2) Let the application finish and go to the Logs tab to view the AM log
> Here, the service AM API is failing with a 401 authentication error.
> {code}
> Request URL: 
> http://xxx:8188/ws/v1/applicationhistory/containers/container_e09_1527737134553_0034_01_01/logs/serviceam.log?_=1527799590942
> Request Method: GET
> Status Code: 401 Authentication required
>  Response 
> <html>
> <head>
> <title>Error 401 Authentication required</title>
> </head>
> <body>
> <h2>HTTP ERROR 401</h2>
> <p>Problem accessing 
> /ws/v1/applicationhistory/containers/container_e09_1527737134553_0034_01_01/logs/serviceam.log.
>  Reason:
> <pre>    Authentication required</pre></p>
> </body>
> </html>
>  {code}






[jira] [Updated] (YARN-8426) Upgrade jquery-ui to 1.12.1 in YARN

2018-06-14 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8426:
-
Attachment: YARN-8426.001.patch

> Upgrade jquery-ui to 1.12.1 in YARN
> ---
>
> Key: YARN-8426
> URL: https://issues.apache.org/jira/browse/YARN-8426
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8426.001.patch
>
>
> In alignment with HADOOP-15483, upgrade jquery-ui for the YARN common package.






[jira] [Created] (YARN-8426) Upgrade jquery-ui to 1.12.1 in YARN

2018-06-14 Thread Sunil Govindan (JIRA)
Sunil Govindan created YARN-8426:


 Summary: Upgrade jquery-ui to 1.12.1 in YARN
 Key: YARN-8426
 URL: https://issues.apache.org/jira/browse/YARN-8426
 Project: Hadoop YARN
  Issue Type: Bug
  Components: webapp
Reporter: Sunil Govindan
Assignee: Sunil Govindan


In alignment with HADOOP-15483, upgrade jquery-ui for the YARN common package.






[jira] [Commented] (YARN-8426) Upgrade jquery-ui to 1.12.1 in YARN

2018-06-14 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512219#comment-16512219
 ] 

Sunil Govindan commented on YARN-8426:
--

With this patch, jquery-ui is updated to 1.12.1. I tested the old RM UI and it 
works fine.

> Upgrade jquery-ui to 1.12.1 in YARN
> ---
>
> Key: YARN-8426
> URL: https://issues.apache.org/jira/browse/YARN-8426
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8426.001.patch
>
>
> In alignment with HADOOP-15483, upgrade jquery-ui for the YARN common package.






[jira] [Commented] (YARN-8426) Upgrade jquery-ui to 1.12.1 in YARN

2018-06-14 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512216#comment-16512216
 ] 

Sunil Govindan commented on YARN-8426:
--

[~msingh] [~jnp] Could you please help review the patch?

cc [~vinodkv] [~leftnoteasy]

> Upgrade jquery-ui to 1.12.1 in YARN
> ---
>
> Key: YARN-8426
> URL: https://issues.apache.org/jira/browse/YARN-8426
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8426.001.patch
>
>
> In alignment with HADOOP-15483, upgrade jquery-ui for the YARN common package.






[jira] [Commented] (YARN-8404) RM Event dispatcher is blocked if ATS1/1.5 server is not running.

2018-06-13 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510845#comment-16510845
 ] 

Sunil Govindan commented on YARN-8404:
--

Committing shortly.

> RM Event dispatcher is blocked if ATS1/1.5 server is not running. 
> --
>
> Key: YARN-8404
> URL: https://issues.apache.org/jira/browse/YARN-8404
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.0.2
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Blocker
> Attachments: YARN-8404.01.patch
>
>
> It is observed that if the ATS1/1.5 daemon is not running, RM recovery is 
> delayed until the timeline client times out for each application. By default, 
> the timeout takes around 5 minutes, so the total time the RM waits is 
> *(number of completed applications in the cluster * 5 minutes)*, which 
> effectively hangs the RM.
> The primary reason for this behavior is YARN-3044/YARN-4129, which refactored 
> the existing system metrics publisher and made the appFinished event 
> synchronous where it was previously asynchronous.






[jira] [Commented] (YARN-8415) TimelineWebServices.getEntity should throw a ForbiddenException(403) instead of 404 when ACL checks fail

2018-06-12 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509989#comment-16509989
 ] 

Sunil Govindan commented on YARN-8415:
--

Thanks [~suma.shivaprasad]. Patch looks fine.

Pending jenkins.

> TimelineWebServices.getEntity should throw a ForbiddenException(403) instead 
> of 404 when ACL checks fail
> 
>
> Key: YARN-8415
> URL: https://issues.apache.org/jira/browse/YARN-8415
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-8415.1.patch, YARN-8415.2.patch, YARN-8415.3.patch
>
>
> {noformat}
> private TimelineEntity doGetEntity(
>   String entityType,
>   String entityId,
>   EnumSet fields,
>   UserGroupInformation callerUGI) throws YarnException, IOException {
> TimelineEntity entity = null;
> entity =
> store.getEntity(entityId, entityType, fields);
> if (entity != null) {
>   addDefaultDomainIdIfAbsent(entity);
>   // check ACLs
>   if (!timelineACLsManager.checkAccess(
>   callerUGI, ApplicationAccessType.VIEW_APP, entity)) {
>   entity = null;   //Should differentiate from an entity get failure 
> vs ACL check failure here by throwing an Exception.*
>   }
> }
> return entity;
>   }
> {noformat}
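The inline comment in the snippet above points at the fix: an ACL failure should surface as a 403 rather than a null return (which the web layer reports as a 404). A minimal, self-contained sketch of that control flow — the map-backed store, the owner-based access check, and the exception classes are illustrative stand-ins, not the actual Hadoop or JAX-RS types:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only; not the real TimelineWebServices code.
public class EntityAccessSketch {
    // Stand-ins for the JAX-RS NotFoundException / ForbiddenException.
    static class NotFoundException extends RuntimeException { }
    static class ForbiddenException extends RuntimeException { }

    private final Map<String, String> entities = new HashMap<>();
    private final Map<String, String> owners = new HashMap<>();

    void put(String id, String body, String owner) {
        entities.put(id, body);
        owners.put(id, owner);
    }

    // Distinguish "entity missing" (-> 404) from "caller not allowed" (-> 403)
    // instead of collapsing both cases into a null return.
    String getEntity(String id, String caller) {
        String body = entities.get(id);
        if (body == null) {
            throw new NotFoundException();
        }
        if (!caller.equals(owners.get(id))) {
            throw new ForbiddenException();
        }
        return body;
    }
}
```

With this shape, the web layer can map each exception to its HTTP status instead of guessing from a null.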






[jira] [Updated] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-13 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8258:
-
Attachment: YARN-8258.008.patch

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch, 
> YARN-8258.003.patch, YARN-8258.004.patch, YARN-8258.005.patch, 
> YARN-8258.006.patch, YARN-8258.007.patch, YARN-8258.008.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally, all filters from the default context have to be inherited by the 
> UI2 context as well.






[jira] [Updated] (YARN-8404) Timeline event publish need to be async to avoid Dispatcher thread leak in case ATS is down

2018-06-13 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8404:
-
Summary: Timeline event publish need to be async to avoid Dispatcher thread 
leak in case ATS is down  (was: RM Event dispatcher is blocked if ATS1/1.5 
server is not running. )

> Timeline event publish need to be async to avoid Dispatcher thread leak in 
> case ATS is down
> ---
>
> Key: YARN-8404
> URL: https://issues.apache.org/jira/browse/YARN-8404
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.0.2
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Blocker
> Attachments: YARN-8404.01.patch
>
>
> It is observed that if the ATS1/1.5 daemon is not running, RM recovery is 
> delayed until the timeline client times out for each application. By default, 
> the timeout takes around 5 minutes, so the total time the RM waits is 
> *(number of completed applications in the cluster * 5 minutes)*, which 
> effectively hangs the RM.
> The primary reason for this behavior is YARN-3044/YARN-4129, which refactored 
> the existing system metrics publisher and made the appFinished event 
> synchronous where it was previously asynchronous.






[jira] [Commented] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-13 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511007#comment-16511007
 ] 

Sunil Govindan commented on YARN-8258:
--

HADOOP-15518 and YARN-8108 primarily fix a request-replay error that occurs 
when two or more AuthenticationFilters process the same request. That needs 
more testing and discussion.

Meanwhile, the UI2 launch-context issue can be fixed another way.

UI2 redirection in a Kerberized cluster was failing because a second redirect 
for index.html came from the Jetty side itself. With a WebAppContext config 
change, that redirect can be forced to come from the client instead, so the 
request-replay error is avoided. With a minor fix in the UI2 code, index.html 
can be avoided as well.

Since this has been tested in SSO and Kerberized clusters, I feel this is a 
good solution for now. Once the HADOOP-15518 and YARN-8108 decisions are 
finalized, we can disable this redirection config.

cc [~vinodkv] please help review the latest change.

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch, 
> YARN-8258.003.patch, YARN-8258.004.patch, YARN-8258.005.patch, 
> YARN-8258.006.patch, YARN-8258.007.patch, YARN-8258.008.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally, all filters from the default context have to be inherited by the 
> UI2 context as well.






[jira] [Commented] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-13 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511380#comment-16511380
 ] 

Sunil Govindan commented on YARN-8258:
--

The findbugs warnings and the shaded-client error are both present on trunk.
I'll raise another JIRA to handle these separately.

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch, 
> YARN-8258.003.patch, YARN-8258.004.patch, YARN-8258.005.patch, 
> YARN-8258.006.patch, YARN-8258.007.patch, YARN-8258.008.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally, all filters from the default context have to be inherited by the 
> UI2 context as well.






[jira] [Commented] (YARN-8421) when moving app, activeUsers is increased, even though app does not have outstanding request

2018-06-12 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509608#comment-16509608
 ] 

Sunil Govindan commented on YARN-8421:
--

This change makes sense. Could you please add a unit test to validate this and 
verify that the patch fixes it?

Three cases to cover here:
 # Active user count has to decrease in the old queue
 # Active user count increases in the target queue if the moved app has 
pending requests
 # Active user count in the target queue stays the same if the moved app has 
NO pending requests
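The three cases above can be modeled with a small sketch. Everything here is illustrative — a toy per-queue active-user counter, not the real CapacityScheduler user management — under the assumption that a user counts as active in a queue only while it owns an app with pending requests there:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of per-queue active-user tracking (illustrative only).
public class MoveAppSketch {
    static class Queue {
        // user -> number of that user's apps with pending requests
        private final Map<String, Integer> activeApps = new HashMap<>();

        int activeUsers() { return activeApps.size(); }

        void activate(String user) { activeApps.merge(user, 1, Integer::sum); }

        void deactivate(String user) {
            // drop the user entirely once no pending apps remain
            activeApps.computeIfPresent(user, (u, n) -> n == 1 ? null : n - 1);
        }
    }

    // Moving an app should only touch the counters when the app still has
    // outstanding (pending) requests.
    static void moveApp(Queue from, Queue to, String user, boolean hasPending) {
        if (hasPending) {
            from.deactivate(user);  // case 1: old queue count decreases
            to.activate(user);      // case 2: target queue count increases
        }
        // case 3: no pending requests -> both counts stay unchanged
    }
}
```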

> when moving app, activeUsers is increased, even though app does not have 
> outstanding request 
> -
>
> Key: YARN-8421
> URL: https://issues.apache.org/jira/browse/YARN-8421
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.4
>Reporter: kyungwan nam
>Priority: Major
> Attachments: YARN-8421.001.patch
>
>
> All containers for app1 have been allocated.
> Move app1 from the default queue to the test queue as follows:
> {code}
>   yarn rmadmin application -movetoqueue app1 -queue test
> {code}
> _activeUsers_ of the test queue increases even though app1 has no 
> outstanding requests.






[jira] [Commented] (YARN-8421) when moving app, activeUsers is increased, even though app does not have outstanding request

2018-06-12 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509611#comment-16509611
 ] 

Sunil Govindan commented on YARN-8421:
--

cc [~maniraj...@gmail.com], who also mentioned this issue in another ticket.

> when moving app, activeUsers is increased, even though app does not have 
> outstanding request 
> -
>
> Key: YARN-8421
> URL: https://issues.apache.org/jira/browse/YARN-8421
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.4
>Reporter: kyungwan nam
>Priority: Major
> Attachments: YARN-8421.001.patch
>
>
> All containers for app1 have been allocated.
> Move app1 from the default queue to the test queue as follows:
> {code}
>   yarn rmadmin application -movetoqueue app1 -queue test
> {code}
> _activeUsers_ of the test queue increases even though app1 has no 
> outstanding requests.






[jira] [Updated] (YARN-8423) GPU does not get released even though the application gets killed.

2018-06-14 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8423:
-
Attachment: YARN-8423.001.patch

> GPU does not get released even though the application gets killed.
> --
>
> Key: YARN-8423
> URL: https://issues.apache.org/jira/browse/YARN-8423
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Critical
> Attachments: YARN-8423.001.patch, kill-container-nm.log
>
>
> Run a TensorFlow app requesting one GPU.
> Kill the application once the GPU is allocated.
> Query the NodeManager once the application is killed; we see that the GPU is 
> not being released.
> {code}
>  curl -i /ws/v1/node/resources/yarn.io%2Fgpu
> {"gpuDeviceInformation":{"gpus":[{"productName":"","uuid":"GPU-","minorNumber":0,"gpuUtilizations":{"overallGpuUtilization":0.0},"gpuMemoryUsage":{"usedMemoryMiB":73,"availMemoryMiB":12125,"totalMemoryMiB":12198},"temperature":{"currentGpuTemp":28.0,"maxGpuTemp":85.0,"slowThresholdGpuTemp":82.0}},{"productName":"","uuid":"GPU-","minorNumber":1,"gpuUtilizations":{"overallGpuUtilization":0.0},"gpuMemoryUsage":{"usedMemoryMiB":73,"availMemoryMiB":12125,"totalMemoryMiB":12198},"temperature":{"currentGpuTemp":28.0,"maxGpuTemp":85.0,"slowThresholdGpuTemp":82.0}}],"driverVersion":""},"totalGpuDevices":[{"index":0,"minorNumber":0},{"index":1,"minorNumber":1}],"assignedGpuDevices":[{"index":0,"minorNumber":0,"containerId":"container_"}]}
> {code}
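The `assignedGpuDevices` entry in the output above is the symptom: the container is gone but its device assignment is still recorded. A toy sketch of the bookkeeping involved — illustrative only, not the NodeManager's actual GPU allocator:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;

// Toy GPU assignment table keyed by device minor number (illustrative only).
public class GpuReleaseSketch {
    private final Map<Integer, String> deviceToContainer = new HashMap<>();

    GpuReleaseSketch(int... minorNumbers) {
        for (int m : minorNumbers) {
            deviceToContainer.put(m, null);  // null = device is free
        }
    }

    // Assign the first free device to the container, if any.
    Optional<Integer> allocate(String containerId) {
        for (Map.Entry<Integer, String> e : deviceToContainer.entrySet()) {
            if (e.getValue() == null) {
                e.setValue(containerId);
                return Optional.of(e.getKey());
            }
        }
        return Optional.empty();
    }

    // The symptom in this bug report amounts to this cleanup being skipped
    // when the container is killed instead of completing normally.
    void releaseFor(String containerId) {
        deviceToContainer.replaceAll(
            (minor, c) -> containerId.equals(c) ? null : c);
    }

    long assignedCount() {
        return deviceToContainer.values().stream()
            .filter(Objects::nonNull).count();
    }
}
```

The fix is to make sure `releaseFor`-style cleanup runs on every container exit path, including kills.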






[jira] [Commented] (YARN-8423) GPU does not get released even though the application gets killed.

2018-06-14 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512890#comment-16512890
 ] 

Sunil Govindan commented on YARN-8423:
--

Thanks [~leftnoteasy] for the detailed analysis.

Attaching an initial patch.

> GPU does not get released even though the application gets killed.
> --
>
> Key: YARN-8423
> URL: https://issues.apache.org/jira/browse/YARN-8423
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Critical
> Attachments: YARN-8423.001.patch, kill-container-nm.log
>
>
> Run a TensorFlow app requesting one GPU.
> Kill the application once the GPU is allocated.
> Query the NodeManager once the application is killed; we see that the GPU is 
> not being released.
> {code}
>  curl -i /ws/v1/node/resources/yarn.io%2Fgpu
> {"gpuDeviceInformation":{"gpus":[{"productName":"","uuid":"GPU-","minorNumber":0,"gpuUtilizations":{"overallGpuUtilization":0.0},"gpuMemoryUsage":{"usedMemoryMiB":73,"availMemoryMiB":12125,"totalMemoryMiB":12198},"temperature":{"currentGpuTemp":28.0,"maxGpuTemp":85.0,"slowThresholdGpuTemp":82.0}},{"productName":"","uuid":"GPU-","minorNumber":1,"gpuUtilizations":{"overallGpuUtilization":0.0},"gpuMemoryUsage":{"usedMemoryMiB":73,"availMemoryMiB":12125,"totalMemoryMiB":12198},"temperature":{"currentGpuTemp":28.0,"maxGpuTemp":85.0,"slowThresholdGpuTemp":82.0}}],"driverVersion":""},"totalGpuDevices":[{"index":0,"minorNumber":0},{"index":1,"minorNumber":1}],"assignedGpuDevices":[{"index":0,"minorNumber":0,"containerId":"container_"}]}
> {code}






[jira] [Commented] (YARN-8404) RM Event dispatcher is blocked if ATS1/1.5 server is not running.

2018-06-10 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16507645#comment-16507645
 ] 

Sunil Govindan commented on YARN-8404:
--

Thanks [~rohithsharma] for the patch.

I think one of the reasons to make appFinished synchronous was NOT to lose the 
event published to ATS: the state store and ATS get updated at the same time. 
Though this approach seems fine, I think there is more risk in exposing a 
synchronous call on the main dispatcher. If ATS is down, it will block the 
dispatcher thread, and a network delay or something similar could even cause 
an OOM in the RM.

Hence I think it is a trade-off of occasionally losing an event; for the time 
being it is better to keep it asynchronous until a better cache or similar 
approach can be brought in to preserve the finish-event publish.

The current approach in the patch seems fine to me. I will wait for others to 
review it.
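The asynchronous direction discussed here can be sketched with a bounded, single-threaded publisher: the dispatcher hands the event off and returns immediately, the bounded queue caps memory if ATS stays down, and a full queue drops the event rather than blocking. Purely illustrative; this is not the actual system metrics publisher code:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative async event publisher; not the real YARN publisher classes.
public class AsyncPublishSketch {
    // One worker thread with a bounded queue: if ATS is down, the queue
    // fills up and DiscardPolicy silently drops new events instead of
    // blocking the dispatcher thread or growing memory without bound.
    private final ThreadPoolExecutor publisher = new ThreadPoolExecutor(
        1, 1, 0L, TimeUnit.MILLISECONDS,
        new ArrayBlockingQueue<>(1000),
        new ThreadPoolExecutor.DiscardPolicy());

    // Called from the dispatcher thread; returns immediately.
    void appFinished(Runnable publishCall) {
        publisher.execute(publishCall);
    }

    void stop() {
        publisher.shutdown();
    }

    // Small demo: enqueue one event and wait for the worker to run it.
    static boolean demo() {
        AsyncPublishSketch p = new AsyncPublishSketch();
        CountDownLatch done = new CountDownLatch(1);
        p.appFinished(done::countDown);
        try {
            return done.await(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            return false;
        } finally {
            p.stop();
        }
    }
}
```

Dropping on saturation is exactly the "occasionally lose an event" trade-off described above; a persistent cache would be needed to remove it entirely.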

> RM Event dispatcher is blocked if ATS1/1.5 server is not running. 
> --
>
> Key: YARN-8404
> URL: https://issues.apache.org/jira/browse/YARN-8404
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.0.2
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Blocker
> Attachments: YARN-8404.01.patch
>
>
> It is observed that if the ATS1/1.5 daemon is not running, RM recovery is 
> delayed until the timeline client times out for each application. By default, 
> the timeout takes around 5 minutes, so the total time the RM waits is 
> *(number of completed applications in the cluster * 5 minutes)*, which 
> effectively hangs the RM.
> The primary reason for this behavior is YARN-3044/YARN-4129, which refactored 
> the existing system metrics publisher and made the appFinished event 
> synchronous where it was previously asynchronous.






[jira] [Commented] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-14 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513275#comment-16513275
 ] 

Sunil Govindan commented on YARN-8258:
--

I have checked in Chrome and Firefox now; in both, the pages load fine.

I'll try Safari as you mentioned.

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch, 
> YARN-8258.003.patch, YARN-8258.004.patch, YARN-8258.005.patch, 
> YARN-8258.006.patch, YARN-8258.007.patch, YARN-8258.008.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally, all filters from the default context have to be inherited by the 
> UI2 context as well.






[jira] [Commented] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-13 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511909#comment-16511909
 ] 

Sunil Govindan commented on YARN-8258:
--

configs.env is a static file loaded from the deployed webapps folder of UI2. 
It is mainly used for development; in release mode this config is not used.

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch, 
> YARN-8258.003.patch, YARN-8258.004.patch, YARN-8258.005.patch, 
> YARN-8258.006.patch, YARN-8258.007.patch, YARN-8258.008.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally, all filters from the default context have to be inherited by the 
> UI2 context as well.






[jira] [Commented] (YARN-4781) Support intra-queue preemption for fairness ordering policy.

2018-05-30 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495427#comment-16495427
 ] 

Sunil Govindan commented on YARN-4781:
--

Thank you. I'll commit this to branch-2 shortly.

> Support intra-queue preemption for fairness ordering policy.
> 
>
> Key: YARN-4781
> URL: https://issues.apache.org/jira/browse/YARN-4781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Wangda Tan
>Assignee: Eric Payne
>Priority: Major
> Fix For: 3.0.3
>
> Attachments: YARN-4781.001.patch, YARN-4781.002.patch, 
> YARN-4781.003.patch, YARN-4781.004.patch, YARN-4781.005.branch-2.patch, 
> YARN-4781.005.patch
>
>
> We introduced the fairness queue ordering policy in YARN-3319, which lets 
> large applications make progress without starving small applications. 
> However, if a large application takes the queue's resources and its 
> containers have long lifespans, small applications can still wait a long 
> time for resources, and SLAs cannot be guaranteed.
> Instead of waiting for applications to release resources on their own, we 
> need to preempt resources of queues with the fairness policy enabled.






[jira] [Commented] (YARN-8276) [UI2] After version field became mandatory, form-based submission of new YARN service through UI2 doesn't work

2018-05-31 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496263#comment-16496263
 ] 

Sunil Govindan commented on YARN-8276:
--

Yes, the changes look fine to me.

I'll commit this shortly.

> [UI2] After version field became mandatory, form-based submission of new YARN 
> service through UI2 doesn't work
> --
>
> Key: YARN-8276
> URL: https://issues.apache.org/jira/browse/YARN-8276
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Gergely Novák
>Assignee: Gergely Novák
>Priority: Critical
> Attachments: YARN-8276.001.patch
>
>
> After the version field became mandatory in YARN service, one cannot create 
> a new service through the UI: there is no way to specify the version field, 
> and the service fails with the following message:
> {code}
> "Error: Adapter operation failed". 
> {code}
> Checking through browser dev tools, the REST response is the following:
> {code}
> {"diagnostics":"Version of service sleeper-service is either empty or not 
> provided"}
> {code}
> Discovered by [~vinodkv].






[jira] [Commented] (YARN-8197) Tracking URL in the app state does not get redirected to MR ApplicationMaster for Running applications

2018-05-31 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496404#comment-16496404
 ] 

Sunil Govindan commented on YARN-8197:
--

Thanks [~vinodkv] [~eyang]

Updating the latest patch to address the comments.

> Tracking URL in the app state does not get redirected to MR ApplicationMaster 
> for Running applications
> --
>
> Key: YARN-8197
> URL: https://issues.apache.org/jira/browse/YARN-8197
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Critical
> Attachments: YARN-8197.001.patch, YARN-8197.002.patch, 
> YARN-8197.003.patch
>
>
> {code}
> org.eclipse.jetty.servlet.ServletHandler:
> javax.servlet.ServletException: Could not determine the proxy server for 
> redirection
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.findRedirectUrl(AmIpFilter.java:211)
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:145)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1617)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:534)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Updated] (YARN-8197) Tracking URL in the app state does not get redirected to MR ApplicationMaster for Running applications

2018-05-31 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8197:
-
Attachment: YARN-8197.003.patch

> Tracking URL in the app state does not get redirected to MR ApplicationMaster 
> for Running applications
> --
>
> Key: YARN-8197
> URL: https://issues.apache.org/jira/browse/YARN-8197
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Critical
> Attachments: YARN-8197.001.patch, YARN-8197.002.patch, 
> YARN-8197.003.patch
>
>
> {code}
> org.eclipse.jetty.servlet.ServletHandler:
> javax.servlet.ServletException: Could not determine the proxy server for 
> redirection
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.findRedirectUrl(AmIpFilter.java:211)
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:145)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1617)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:534)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Updated] (YARN-8258) [UI2] New UI webappcontext should inherit all filters from default context

2018-05-31 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8258:
-
Attachment: YARN-8258.002.patch

> [UI2] New UI webappcontext should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally all filters from the default context have to be inherited by the UI2 
> context as well.






[jira] [Updated] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-05-31 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8258:
-
Summary: YARN webappcontext for UI2 should inherit all filters from default 
context  (was: [UI2] New UI webappcontext should inherit all filters from 
default context)

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally all filters from the default context have to be inherited by the UI2 
> context as well.






[jira] [Updated] (YARN-8258) [UI2] New UI webappcontext should inherit all filters from default context

2018-05-31 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8258:
-
Issue Type: Bug  (was: Improvement)

> [UI2] New UI webappcontext should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally all filters from the default context have to be inherited by the UI2 
> context as well.






[jira] [Commented] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-05-31 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496387#comment-16496387
 ] 

Sunil Govindan commented on YARN-8258:
--

Attaching a patch for review. It now adds all filters from the default context 
to the new UI2 context, and exposes the same set of URLs via those filters.

 

[~vinodkv] [~leftnoteasy] [~rohithsharma], please help review the patch.

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally all filters from the default context have to be inherited by the UI2 
> context as well.






[jira] [Commented] (YARN-4781) Support intra-queue preemption for fairness ordering policy.

2018-05-28 Thread Sunil Govindan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492540#comment-16492540
 ] 

Sunil Govindan commented on YARN-4781:
--

Hi [~eepayne]

I have committed this to trunk/branch-3.1/branch-3.0, but branch-2 is failing. 
Could you please share a branch-2 patch? Thanks.

> Support intra-queue preemption for fairness ordering policy.
> 
>
> Key: YARN-4781
> URL: https://issues.apache.org/jira/browse/YARN-4781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Wangda Tan
>Assignee: Eric Payne
>Priority: Major
> Attachments: YARN-4781.001.patch, YARN-4781.002.patch, 
> YARN-4781.003.patch, YARN-4781.004.patch, YARN-4781.005.patch
>
>
> We introduced the fairness queue policy in YARN-3319, which lets large 
> applications make progress without starving small applications. However, if a 
> large application takes the queue’s resources and the containers of the large 
> app have long lifespans, small applications could still wait a long time for 
> resources and SLAs cannot be guaranteed.
> Instead of waiting for applications to release resources on their own, we need 
> to preempt resources of queues that have the fairness policy enabled.






[jira] [Commented] (YARN-8369) Javadoc build failed due to "bad use of '>'"

2018-05-28 Thread Sunil Govindan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492567#comment-16492567
 ] 

Sunil Govindan commented on YARN-8369:
--

Thanks [~tasanuma0829].

Looks good. Will commit shortly.

> Javadoc build failed due to "bad use of '>'"
> 
>
> Key: YARN-8369
> URL: https://issues.apache.org/jira/browse/YARN-8369
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: build, docs
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: YARN-8369.1.patch
>
>
> {noformat}
> $ mvn javadoc:javadoc --projects 
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common
> ...
> [ERROR] 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java:263:
>  error: bad use of '>'
> [ERROR]* included) has a >0 value.
> [ERROR]  ^
> [ERROR] 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java:266:
>  error: bad use of '>'
> [ERROR]* @return returns true if any resource is >0
> [ERROR]  ^
> {noformat}






[jira] [Commented] (YARN-8384) stdout, stderr logs of a Native Service container is coming with group as nobody

2018-05-31 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497051#comment-16497051
 ] 

Sunil Govindan commented on YARN-8384:
--

After YARN-7684, I can see the below code snippet in container-executor.c:
{code:java}
char *init_log_path(const char *container_log_dir, const char *logfile) {
  ..
  ..
  if (change_owner(tmp_buffer, user_detail->pw_uid, user_detail->pw_gid) != 0) {

  }
  ..
  ..
}

{code}
So here the log file's owner is changed to the incoming user, and its group is 
taken from the same user entry. I am not completely sure, but this seems like 
the problem.

 

cc [~leftnoteasy] [~eyang]
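For illustration only, the ownership change that init_log_path performs can be 
sketched in plain Java. This is a minimal sketch, not the actual 
container-executor code: the class name and file are hypothetical, and it uses 
the current process user (so the chown is permitted without root) where 
container-executor would use the incoming container user's pw_uid/pw_gid.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.GroupPrincipal;
import java.nio.file.attribute.PosixFileAttributeView;
import java.nio.file.attribute.UserPrincipal;
import java.nio.file.attribute.UserPrincipalLookupService;

public class Main {
    public static void main(String[] args) throws Exception {
        // Stand-in for a container log file such as stdout.txt.
        Path log = Files.createTempFile("stdout", ".txt");

        // The "incoming user" -- here simply the current process user,
        // so the chown below succeeds without extra privileges.
        String user = System.getProperty("user.name");
        UserPrincipalLookupService lookup =
            log.getFileSystem().getUserPrincipalLookupService();
        UserPrincipal owner = lookup.lookupPrincipalByName(user);

        // Mirror init_log_path/change_owner: chown the log to the user...
        Files.setOwner(log, owner);

        // ...and set the group. container-executor takes the group from the
        // same passwd entry (pw_gid); here we reuse the file's current group
        // so the sketch runs unprivileged.
        PosixFileAttributeView view =
            Files.getFileAttributeView(log, PosixFileAttributeView.class);
        GroupPrincipal group = view.readAttributes().group();
        view.setGroup(group);

        System.out.println(Files.getOwner(log).getName().equals(user));
        Files.delete(log);
    }
}
```

If the group really is taken from the launching user's passwd entry rather than 
from the hadoop group, files end up as nobody:nobody in the nonsecure nobody 
setup, which matches the listing below.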

> stdout, stderr logs of a Native Service container is coming with group as 
> nobody
> 
>
> Key: YARN-8384
> URL: https://issues.apache.org/jira/browse/YARN-8384
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Sunil Govindan
>Priority: Major
>  Labels: docker
>
> When {{yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users}} 
> is set to true and 
> {{yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user}} is 
> set to nobody, the docker container runs as nobody:nobody in yarn mode.
> The log files will then be initialized as nobody:nobody:
> {noformat}
> -rw-r--r-- 1 nobody hadoop 354 May 31 17:33 container-localizer-syslog
> -rw-r--r-- 1 nobody hadoop 1042 May 31 17:35 directory.info
> -rw-r----- 1 nobody hadoop 4944 May 31 17:35 launch_container.sh
> -rw-r--r-- 1 nobody hadoop 440 May 31 17:35 prelaunch.err
> -rw-r--r-- 1 nobody hadoop 100 May 31 17:35 prelaunch.out
> -rw-r----- 1 nobody nobody 18733 May 31 17:37 stderr.txt
> -rw-r----- 1 nobody nobody 400 May 31 17:35 stdout.txt
> {noformat}






[jira] [Commented] (YARN-8197) Tracking URL in the app state does not get redirected to MR ApplicationMaster for Running applications

2018-05-31 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496884#comment-16496884
 ] 

Sunil Govindan commented on YARN-8197:
--

Somehow I missed a couple of imports. I'll correct them in the next patch, but 
will wait for [~vinodkv]'s review.

> Tracking URL in the app state does not get redirected to MR ApplicationMaster 
> for Running applications
> --
>
> Key: YARN-8197
> URL: https://issues.apache.org/jira/browse/YARN-8197
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Critical
> Attachments: YARN-8197.001.patch, YARN-8197.002.patch, 
> YARN-8197.003.patch
>
>






[jira] [Comment Edited] (YARN-8384) stdout, stderr logs of a Native Service container is coming with group as nobody

2018-05-31 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497051#comment-16497051
 ] 

Sunil Govindan edited comment on YARN-8384 at 5/31/18 7:57 PM:
---

After YARN-7684, I can see the below code snippet in container-executor.c:
{code:java}
char *init_log_path(const char *container_log_dir, const char *logfile) {
  ..
  ..
  if (change_owner(tmp_buffer, user_detail->pw_uid, user_detail->pw_gid) != 0) {

  }
  ..
  ..
}

{code}
So here the log file's owner and group are changed to those of the incoming 
user. I am not completely sure, but this seems like the problem.

 

cc [~leftnoteasy] [~eyang]


was (Author: sunilg):
After YARN-7684, I can see below code snippet in container-executor.c
{code:java}
char *init_log_path(const char *container_log_dir, const char *logfile) {
  ..
  ..
  if (change_owner(tmp_buffer, user_detail->pw_uid, user_detail->pw_gid) != 0) {

  }
  ..
  ..
}

{code}
So ideally here the log file owner is change to the incoming user and group is 
also take from same. I am not very sure, but this seems like the pblm.

 

cc [~leftnoteasy] [~eyang]

> stdout, stderr logs of a Native Service container is coming with group as 
> nobody
> 
>
> Key: YARN-8384
> URL: https://issues.apache.org/jira/browse/YARN-8384
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Sunil Govindan
>Priority: Major
>  Labels: docker
>
> When {{yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users}} 
> is set to true and 
> {{yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user}} is 
> set to nobody, the docker container runs as nobody:nobody in yarn mode.
> The log files will then be initialized as nobody:nobody:
> {noformat}
> -rw-r--r-- 1 nobody hadoop 354 May 31 17:33 container-localizer-syslog
> -rw-r--r-- 1 nobody hadoop 1042 May 31 17:35 directory.info
> -rw-r----- 1 nobody hadoop 4944 May 31 17:35 launch_container.sh
> -rw-r--r-- 1 nobody hadoop 440 May 31 17:35 prelaunch.err
> -rw-r--r-- 1 nobody hadoop 100 May 31 17:35 prelaunch.out
> -rw-r----- 1 nobody nobody 18733 May 31 17:37 stderr.txt
> -rw-r----- 1 nobody nobody 400 May 31 17:35 stdout.txt
> {noformat}






[jira] [Created] (YARN-8384) stdout, stderr logs of a Native Service container is coming with group as nobody

2018-05-31 Thread Sunil Govindan (JIRA)
Sunil Govindan created YARN-8384:


 Summary: stdout, stderr logs of a Native Service container is 
coming with group as nobody
 Key: YARN-8384
 URL: https://issues.apache.org/jira/browse/YARN-8384
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-native-services
Reporter: Sunil Govindan


# ls -l
total 48
-rw-r--r-- 1 nobody hadoop   354 May 31 17:33 container-localizer-syslog
-rw-r--r-- 1 nobody hadoop  1042 May 31 17:35 directory.info
-rw-r----- 1 nobody hadoop  4944 May 31 17:35 launch_container.sh
-rw-r--r-- 1 nobody hadoop   440 May 31 17:35 prelaunch.err
-rw-r--r-- 1 nobody hadoop   100 May 31 17:35 prelaunch.out
-rw-r----- 1 nobody nobody 18733 May 31 17:37 stderr.txt
-rw-r----- 1 nobody nobody   400 May 31 17:35 stdout.txt






[jira] [Updated] (YARN-8384) stdout, stderr logs of a Native Service container is coming with group as nobody

2018-05-31 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8384:
-
Description: 
{noformat}
-rw-r--r-- 1 nobody hadoop 354 May 31 17:33 container-localizer-syslog
-rw-r--r-- 1 nobody hadoop 1042 May 31 17:35 directory.info
-rw-r----- 1 nobody hadoop 4944 May 31 17:35 launch_container.sh
-rw-r--r-- 1 nobody hadoop 440 May 31 17:35 prelaunch.err
-rw-r--r-- 1 nobody hadoop 100 May 31 17:35 prelaunch.out
-rw-r----- 1 nobody nobody 18733 May 31 17:37 stderr.txt
-rw-r----- 1 nobody nobody 400 May 31 17:35 stdout.txt

{noformat}

  was:
# ls -l
total 48
-rw-r--r-- 1 nobody hadoop   354 May 31 17:33 container-localizer-syslog
-rw-r--r-- 1 nobody hadoop  1042 May 31 17:35 directory.info
-rw-r----- 1 nobody hadoop  4944 May 31 17:35 launch_container.sh
-rw-r--r-- 1 nobody hadoop   440 May 31 17:35 prelaunch.err
-rw-r--r-- 1 nobody hadoop   100 May 31 17:35 prelaunch.out
-rw-r----- 1 nobody nobody 18733 May 31 17:37 stderr.txt
-rw-r----- 1 nobody nobody   400 May 31 17:35 stdout.txt


> stdout, stderr logs of a Native Service container is coming with group as 
> nobody
> 
>
> Key: YARN-8384
> URL: https://issues.apache.org/jira/browse/YARN-8384
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Sunil Govindan
>Priority: Major
>
> {noformat}
> -rw-r--r-- 1 nobody hadoop 354 May 31 17:33 container-localizer-syslog
> -rw-r--r-- 1 nobody hadoop 1042 May 31 17:35 directory.info
> -rw-r----- 1 nobody hadoop 4944 May 31 17:35 launch_container.sh
> -rw-r--r-- 1 nobody hadoop 440 May 31 17:35 prelaunch.err
> -rw-r--r-- 1 nobody hadoop 100 May 31 17:35 prelaunch.out
> -rw-r----- 1 nobody nobody 18733 May 31 17:37 stderr.txt
> -rw-r----- 1 nobody nobody 400 May 31 17:35 stdout.txt
> {noformat}






[jira] [Commented] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-05-31 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497071#comment-16497071
 ] 

Sunil Govindan commented on YARN-8258:
--

Yes [~vinodkv]

UI2 was missing the filters that are added to the default context. 
{{httpServer.getWebAppContext().getServletHandler()}} provides all 
FilterHolders and FilterMappings via the {{getFilters}} and 
{{getFilterMappings}} APIs. To define the filters for UI2, we have to iterate 
through the list of FilterHolders available via {{getFilters}} and call 
{{HttpServer2.defineFilter}} for each. While doing this, {{getFilterMappings}} 
helps to get the URL paths associated with each filter name, and UI2 should use 
the same paths, except for the *authentication* filter, where UI2 has to add /*.

With this change, if a custom filter such as AuthenticationFilter or 
JWTAuthHandler is added, the UI2 context will have the filter details with the 
correct paths.
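As a rough sketch of that inheritance logic, the copy step could look like the 
following. This uses hypothetical stand-in types instead of Jetty's real 
FilterHolder/FilterMapping classes and Hadoop's HttpServer2, so it only 
illustrates the path handling, not the actual patch:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class Main {
    // Hypothetical stand-ins for Jetty's FilterHolder and FilterMapping.
    static class FilterDef {
        final String name;
        FilterDef(String name) { this.name = name; }
    }
    static class Mapping {
        final String filterName;
        final List<String> pathSpecs;
        Mapping(String filterName, List<String> pathSpecs) {
            this.filterName = filterName;
            this.pathSpecs = pathSpecs;
        }
    }

    /**
     * Copy every filter of the default context into the UI2 context,
     * reusing each filter's original URL paths except for the
     * authentication filter, which must cover all of UI2 via "/*".
     */
    static Map<String, List<String>> inheritFilters(List<FilterDef> filters,
                                                    List<Mapping> mappings) {
        Map<String, List<String>> ui2 = new LinkedHashMap<>();
        for (FilterDef f : filters) {
            List<String> paths = new ArrayList<>();
            for (Mapping m : mappings) {
                if (m.filterName.equals(f.name)) {
                    paths.addAll(m.pathSpecs);
                }
            }
            if (f.name.equalsIgnoreCase("authentication")) {
                paths = List.of("/*");   // UI2 pages all need auth coverage
            }
            // In the real patch this is where HttpServer2.defineFilter
            // would be called against the UI2 context.
            ui2.put(f.name, paths);
        }
        return ui2;
    }

    public static void main(String[] args) {
        List<FilterDef> filters = List.of(
            new FilterDef("NoCacheFilter"),
            new FilterDef("authentication"));
        List<Mapping> mappings = List.of(
            new Mapping("NoCacheFilter", List.of("/ws/*")),
            new Mapping("authentication", List.of("/logs/*")));
        System.out.println(inheritFilters(filters, mappings));
    }
}
```

The filter names and path specs above are made up for the example; the point is 
only that every filter is copied with its original mappings, while the 
authentication filter is widened to /*.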

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch, 
> YARN-8258.003.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally all filters from the default context have to be inherited by the UI2 
> context as well.






[jira] [Commented] (YARN-8197) Tracking URL in the app state does not get redirected to MR ApplicationMaster for Running applications

2018-05-31 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497074#comment-16497074
 ] 

Sunil Govindan commented on YARN-8197:
--

Thanks [~vinodkv]. Updating a new patch after fixing the checkstyle issues.

> Tracking URL in the app state does not get redirected to MR ApplicationMaster 
> for Running applications
> --
>
> Key: YARN-8197
> URL: https://issues.apache.org/jira/browse/YARN-8197
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Critical
> Attachments: YARN-8197.001.patch, YARN-8197.002.patch, 
> YARN-8197.003.patch, YARN-8197.004.patch
>
>






[jira] [Updated] (YARN-8197) Tracking URL in the app state does not get redirected to MR ApplicationMaster for Running applications

2018-05-31 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8197:
-
Attachment: YARN-8197.004.patch

> Tracking URL in the app state does not get redirected to MR ApplicationMaster 
> for Running applications
> --
>
> Key: YARN-8197
> URL: https://issues.apache.org/jira/browse/YARN-8197
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Critical
> Attachments: YARN-8197.001.patch, YARN-8197.002.patch, 
> YARN-8197.003.patch, YARN-8197.004.patch
>
>






[jira] [Updated] (YARN-8276) [UI2] After version field became mandatory, form-based submission of new YARN service doesn't work

2018-06-03 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8276:
-
Summary: [UI2] After version field became mandatory, form-based submission 
of new YARN service doesn't work  (was: [UI2] After version field became 
mandatory, form-based submission of new YARN service through UI2 doesn't work)

> [UI2] After version field became mandatory, form-based submission of new YARN 
> service doesn't work
> --
>
> Key: YARN-8276
> URL: https://issues.apache.org/jira/browse/YARN-8276
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Gergely Novák
>Assignee: Gergely Novák
>Priority: Critical
> Attachments: YARN-8276.001.patch
>
>
> After the version field became mandatory in YARN service, one cannot create a 
> new service through the UI: there is no way to specify the version field, and 
> the service fails with the following message:
> {code}
> "Error: Adapter operation failed". 
> {code}
> Checking through browser dev tools, the REST response is the following:
> {code}
> {"diagnostics":"Version of service sleeper-service is either empty or not 
> provided"}
> {code}
> Discovered by [~vinodkv].






[jira] [Updated] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-04 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8258:
-
Attachment: YARN-8258.004.patch

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch, 
> YARN-8258.003.patch, YARN-8258.004.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally all filters from the default context have to be inherited by the UI2 
> context as well.






[jira] [Commented] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-04 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500748#comment-16500748
 ] 

Sunil Govindan commented on YARN-8258:
--

Updating v4 patch with test case to check filter ordering as well.

{{httpServer.getWebAppContext().getServletHandler()}} provides all 
filterHolders and filterMappings, and UI2 copies this context from the default 
context. For SPNEGO, the path spec has to be null to ensure that the SPNEGO 
filter comes after the Kerberos authentication filter.

[~vinodkv], could you please help to check this?

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch, 
> YARN-8258.003.patch, YARN-8258.004.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally all filters from the default context have to be inherited by the UI2 
> context as well.






[jira] [Commented] (YARN-8220) Running Tensorflow on YARN with GPU and Docker - Examples

2018-06-01 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497708#comment-16497708
 ] 

Sunil Govindan commented on YARN-8220:
--

Attaching v1 patch. This patch mainly covers the scripts, examples, Dockerfiles, 
etc. that help run Tensorflow on YARN (distributed/standalone).

Thank you very much [~leftnoteasy] for helping to integrate TF into YARN with 
GPU/Docker.

 

Details of this work:
 # A script to auto-generate the native service spec file for Tensorflow jobs, 
which will auto-submit the service to YARN. This helps run TF jobs on YARN 
without any complexity. A detailed example is available in the doc.
 # Support to run the latest Tensorflow 1.8 and CUDA 9 on YARN.
 # Distributed Tensorflow support. Users can simply run this by providing the 
{{--distributed}} option to the script, and multiple *worker* instances can run 
on different nodes and leverage the resources in YARN.
 # Dockerfiles are provided for various cases (GPU/CPU, different Tensorflow 
versions, etc.).
 # Various tests were done based on TF version / GPU etc., and the results are 
published as part of the document in the patch.

Example:
{code:java}
python submit_tf_job.py --remote_conf_path hdfs:///tf-job-conf --input_spec 
example_tf_job_spec.json --docker_image gpu.cuda_9.0.tf_1.8.0 --job_name 
distributed-tf-gpu --user tf-user --domain tensorflow.site --distributed 
--kerberos
{code}
cc [~vinodkv] [~rohithsharma]

> Running Tensorflow on YARN with GPU and Docker - Examples
> -
>
> Key: YARN-8220
> URL: https://issues.apache.org/jira/browse/YARN-8220
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Critical
> Attachments: YARN-8220.001.patch
>
>
> Tensorflow can be run on YARN and can leverage YARN's distributed 
> features.
> This spec file will help to run Tensorflow on YARN with GPU/Docker






[jira] [Updated] (YARN-8220) Running Tensorflow on YARN with GPU and Docker - Examples

2018-06-01 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8220:
-
Summary: Running Tensorflow on YARN with GPU and Docker - Examples  (was: 
Tensorflow yarn spec file to add to native service examples)

> Running Tensorflow on YARN with GPU and Docker - Examples
> -
>
> Key: YARN-8220
> URL: https://issues.apache.org/jira/browse/YARN-8220
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Critical
>
> Tensorflow can be run on YARN and can leverage YARN's distributed 
> features.
> This spec file will help to run Tensorflow on YARN with GPU/Docker






[jira] [Updated] (YARN-8220) Running Tensorflow on YARN with GPU and Docker - Examples

2018-06-01 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8220:
-
Attachment: YARN-8220.001.patch

> Running Tensorflow on YARN with GPU and Docker - Examples
> -
>
> Key: YARN-8220
> URL: https://issues.apache.org/jira/browse/YARN-8220
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Critical
> Attachments: YARN-8220.001.patch
>
>
> Tensorflow can be run on YARN and can leverage YARN's distributed 
> features.
> This spec file will help to run Tensorflow on YARN with GPU/Docker






[jira] [Commented] (YARN-8220) Running Tensorflow on YARN with GPU and Docker - Examples

2018-06-01 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16498882#comment-16498882
 ] 

Sunil Govindan commented on YARN-8220:
--

Hi [~eyang]

One quick doubt

bq.ENTRYPOINT, and CMD in Dockerfile

This means that the ENTRYPOINT and other CMDs are to be specified in the 
Dockerfile, so we would need different Dockerfiles to run different TF 
workloads, which may be inconvenient, correct? We could have changed jobs in 
the Yarnfile itself. Please correct me if I am wrong.

> Running Tensorflow on YARN with GPU and Docker - Examples
> -
>
> Key: YARN-8220
> URL: https://issues.apache.org/jira/browse/YARN-8220
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Critical
> Attachments: YARN-8220.001.patch
>
>
> Tensorflow can be run on YARN and can leverage YARN's distributed 
> features.
> This spec file will help to run Tensorflow on YARN with GPU/Docker






[jira] [Updated] (YARN-8319) More YARN pages need to honor yarn.resourcemanager.display.per-user-apps

2018-06-01 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8319:
-
Attachment: YARN-8319.addendum.001.patch

> More YARN pages need to honor yarn.resourcemanager.display.per-user-apps
> 
>
> Key: YARN-8319
> URL: https://issues.apache.org/jira/browse/YARN-8319
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil Govindan
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8319.001.patch, YARN-8319.002.patch, 
> YARN-8319.003.patch, YARN-8319.addendum.001.patch
>
>
> When this config is on
>  - Per queue page on UI2 should filter app list by user
>  -- TODO: Verify the same with UI1 Per-queue page
>  - ATSv2 with UI2 should filter list of all users' flows and flow activities
>  - Per Node pages
>  -- Listing of apps and containers on a per-node basis should filter apps and 
> containers by user.
> To this end, because this is no longer just for resourcemanager, we should 
> also deprecate {{yarn.resourcemanager.display.per-user-apps}} in favor of 
> {{yarn.webapp.filter-app-list-by-user}}






[jira] [Reopened] (YARN-8319) More YARN pages need to honor yarn.resourcemanager.display.per-user-apps

2018-06-01 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan reopened YARN-8319:
--

Reopening this Jira. For ATSv2, currently only the app owner can see the flows 
list; even the YARN admin can only see their own flows. The YARN admin should 
ideally have visibility of all flows. However, the ATSv2 ACL story is in 
progress, so to complete the user-filtering story, an interim handling is needed.

> More YARN pages need to honor yarn.resourcemanager.display.per-user-apps
> 
>
> Key: YARN-8319
> URL: https://issues.apache.org/jira/browse/YARN-8319
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil Govindan
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8319.001.patch, YARN-8319.002.patch, 
> YARN-8319.003.patch
>
>
> When this config is on
>  - Per queue page on UI2 should filter app list by user
>  -- TODO: Verify the same with UI1 Per-queue page
>  - ATSv2 with UI2 should filter list of all users' flows and flow activities
>  - Per Node pages
>  -- Listing of apps and containers on a per-node basis should filter apps and 
> containers by user.
> To this end, because this is no longer just for resourcemanager, we should 
> also deprecate {{yarn.resourcemanager.display.per-user-apps}} in favor of 
> {{yarn.webapp.filter-app-list-by-user}}






[jira] [Commented] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-04 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501219#comment-16501219
 ] 

Sunil Govindan commented on YARN-8258:
--

Moved the test case into another class to cover all initializers when SPNEGO 
is also used.

Fixed checkstyle issues as well. Attached a new patch.

cc [~vinodkv]

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch, 
> YARN-8258.003.patch, YARN-8258.004.patch, YARN-8258.005.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally all filters from the default context have to be inherited by the UI2 
> context as well.






[jira] [Updated] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-04 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8258:
-
Attachment: YARN-8258.005.patch

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch, 
> YARN-8258.003.patch, YARN-8258.004.patch, YARN-8258.005.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally all filters from the default context have to be inherited by the UI2 
> context as well.






[jira] [Commented] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-05 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501336#comment-16501336
 ] 

Sunil Govindan commented on YARN-8258:
--

Thank you very much [~vinodkv]

Updating new patch addressing all comments.

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch, 
> YARN-8258.003.patch, YARN-8258.004.patch, YARN-8258.005.patch, 
> YARN-8258.006.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally all filters from the default context have to be inherited by the UI2 
> context as well.






[jira] [Updated] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8258:
-
Attachment: YARN-8258.006.patch

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch, 
> YARN-8258.003.patch, YARN-8258.004.patch, YARN-8258.005.patch, 
> YARN-8258.006.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally all filters from the default context have to be inherited by the UI2 
> context as well.






[jira] [Assigned] (YARN-8396) Click on an individual container continuously spins and doesn't load the page

2018-06-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan reassigned YARN-8396:


Assignee: Sunil Govindan

> Click on an individual container continuously spins and doesn't load the page
> -
>
> Key: YARN-8396
> URL: https://issues.apache.org/jira/browse/YARN-8396
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Charan Hebri
>Assignee: Sunil Govindan
>Priority: Blocker
> Attachments: Screen Shot 2018-05-31 at 3.24.09 PM.png
>
>
> For a running application, a click on an individual container leads to an 
> infinite spinner which doesn't load the corresponding page. To reproduce, 
> with a running application click:
> Nodes -> \{Node_HTTP_Address} -> List of Containers on this Node -> 
> \{Container_id}






[jira] [Updated] (YARN-8396) Click on an individual container continuously spins and doesn't load the page

2018-06-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8396:
-
Attachment: YARN-8396.001.patch

> Click on an individual container continuously spins and doesn't load the page
> -
>
> Key: YARN-8396
> URL: https://issues.apache.org/jira/browse/YARN-8396
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Charan Hebri
>Assignee: Sunil Govindan
>Priority: Blocker
> Attachments: Screen Shot 2018-05-31 at 3.24.09 PM.png, 
> YARN-8396.001.patch
>
>
> For a running application, a click on an individual container leads to an 
> infinite spinner which doesn't load the corresponding page. To reproduce, 
> with a running application click:
> Nodes -> \{Node_HTTP_Address} -> List of Containers on this Node -> 
> \{Container_id}






[jira] [Updated] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8258:
-
Attachment: YARN-8258.007.patch

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch, 
> YARN-8258.003.patch, YARN-8258.004.patch, YARN-8258.005.patch, 
> YARN-8258.006.patch, YARN-8258.007.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally all filters from the default context have to be inherited by the UI2 
> context as well.






[jira] [Commented] (YARN-8397) ActivitiesManager thread doesn't handles InterruptedException

2018-06-05 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501572#comment-16501572
 ] 

Sunil Govindan commented on YARN-8397:
--

cc [~leftnoteasy] 

Thanks [~rohithsharma]. Your analysis is correct: we have to break out of the loop when the thread is interrupted.
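
A minimal sketch of that kind of fix, with hypothetical names (this is not the actual ActivitiesManager code): catch InterruptedException inside the sleep loop, restore the interrupt status, and break so the thread (and with it the JVM) can exit:

```java
public class InterruptibleLoop {
    // Hypothetical stand-in for ActivitiesManager's cleanup thread: it sleeps
    // in a loop and must break out when interrupted, otherwise stop() leaves
    // the thread alive and the JVM never exits.
    static Thread startWorker() {
        Thread t = new Thread(() -> {
            while (true) {
                try {
                    Thread.sleep(1000);
                    // ... periodic cleanup work would go here ...
                } catch (InterruptedException e) {
                    // Restore the interrupt status and leave the loop instead
                    // of swallowing the exception and sleeping again.
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        });
        t.start();
        return t;
    }

    public static void main(String[] args) throws Exception {
        Thread worker = startWorker();
        worker.interrupt();   // simulate what serviceStop() would trigger
        worker.join(5000);
        System.out.println(worker.isAlive() ? "still running" : "exited cleanly");
    }
}
```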

> ActivitiesManager thread doesn't handles InterruptedException 
> --
>
> Key: YARN-8397
> URL: https://issues.apache.org/jira/browse/YARN-8397
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Priority: Major
>
> It is observed that while using MiniYARNCluster, MiniYARNCluster#stop doesn't 
> stop the JVM. 
> A thread dump shows that ActivitiesManager is in TIMED_WAITING state. 
> {code}
> "Thread-43" #66 prio=5 os_prio=31 tid=0x7ffea09fd000 nid=0xa103 waiting 
> on condition [0x76f1]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.activities.ActivitiesManager$1.run(ActivitiesManager.java:142)
>   at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Commented] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-05 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501556#comment-16501556
 ] 

Sunil Govindan commented on YARN-8258:
--

Uploading v7 patch.

For the UI2 context, not all filterMapping paths need to be copied. For example, 
the default context has pathSpecs like /cluster/* and /logs/*, which are not 
valid for UI2. For UI2, the context is defined as "/ui2"; hence a pathSpec of 
/* covers all the necessary sub-paths under /ui2 and will apply any filter as 
needed.

But certain filters like SPNEGO don't have any pathSpec added in the default 
context. This keeps the filter order correct and ensures that if any 
alternative auth handlers are present (like the JWT handler for SSO), those 
filters come before the SPNEGO filter to process special tokens.

Hence in UI2 as well, if any filter has a null or empty pathSpec, the same will 
be retained.
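
As a rough illustration of that rule (a hypothetical helper, not the actual patch code), the pathSpec chosen when copying a filter mapping into the /ui2 context could look like:

```java
import java.util.Arrays;

public class Ui2PathSpec {
    // Hypothetical helper mirroring the rule above: a null/empty pathSpec
    // (e.g. the SPNEGO filter) is retained as-is so filter ordering is kept,
    // while concrete specs like /cluster/* are replaced by /*, which covers
    // every sub-path under the /ui2 context.
    static String[] pathSpecsForUi2(String[] defaultContextSpecs) {
        if (defaultContextSpecs == null || defaultContextSpecs.length == 0) {
            return defaultContextSpecs; // keep null/empty to preserve order
        }
        return new String[] {"/*"};
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(
            pathSpecsForUi2(new String[] {"/cluster/*", "/logs/*"})));
        System.out.println(Arrays.toString(pathSpecsForUi2(null)));
    }
}
```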

The latest patch follows this understanding. cc [~vinodkv], please help to 
check the latest patch. Thank you.

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch, 
> YARN-8258.003.patch, YARN-8258.004.patch, YARN-8258.005.patch, 
> YARN-8258.006.patch, YARN-8258.007.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally all filters from the default context have to be inherited by the UI2 
> context as well.






[jira] [Commented] (YARN-8396) Click on an individual container continuously spins and doesn't load the page

2018-06-05 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501567#comment-16501567
 ] 

Sunil Govindan commented on YARN-8396:
--

Straightforward fix: the address was undefined while accessing the yarn-nm-gpu 
endpoint.

[~rohithsharma] Could you please help to review this patch.

 

> Click on an individual container continuously spins and doesn't load the page
> -
>
> Key: YARN-8396
> URL: https://issues.apache.org/jira/browse/YARN-8396
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Charan Hebri
>Assignee: Sunil Govindan
>Priority: Blocker
> Attachments: Screen Shot 2018-05-31 at 3.24.09 PM.png, 
> YARN-8396.001.patch
>
>
> For a running application, a click on an individual container leads to an 
> infinite spinner which doesn't load the corresponding page. To reproduce, 
> with a running application click:
> Nodes -> \{Node_HTTP_Address} -> List of Containers on this Node -> 
> \{Container_id}






[jira] [Comment Edited] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-05 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501793#comment-16501793
 ] 

Sunil Govindan edited comment on YARN-8258 at 6/5/18 1:42 PM:
--

The Findbugs warning is not related to the patch; rather, it already exists in 
trunk. Raised YARN-8398 to track this.

The test case failure is also not related.

 


was (Author: sunilg):
Find bugs warning is not related to patch. Rather it is existing the trunk. 
Raised YARN-8398 to track this.

Test case failure is also not related.

 

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch, 
> YARN-8258.003.patch, YARN-8258.004.patch, YARN-8258.005.patch, 
> YARN-8258.006.patch, YARN-8258.007.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally all filters from the default context have to be inherited by the UI2 
> context as well.






[jira] [Commented] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-05 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501793#comment-16501793
 ] 

Sunil Govindan commented on YARN-8258:
--

The Findbugs warning is not related to the patch; rather, it already exists in 
trunk. Raised YARN-8398 to track this.

The test case failure is also not related.

 

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch, 
> YARN-8258.003.patch, YARN-8258.004.patch, YARN-8258.005.patch, 
> YARN-8258.006.patch, YARN-8258.007.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally all filters from the default context have to be inherited by the UI2 
> context as well.






[jira] [Created] (YARN-8398) Findbugs warning IS2_INCONSISTENT_SYNC in AllocationFileLoaderService.reloadListener

2018-06-05 Thread Sunil Govindan (JIRA)
Sunil Govindan created YARN-8398:


 Summary: Findbugs warning IS2_INCONSISTENT_SYNC in 
AllocationFileLoaderService.reloadListener
 Key: YARN-8398
 URL: https://issues.apache.org/jira/browse/YARN-8398
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Sunil Govindan


{code:java}
Inconsistent synchronization of 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadListener;
 locked 75% of time Bug type IS2_INCONSISTENT_SYNC (click for details)  In 
class 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService
 Field 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadListener
 Synchronized 75% of the time Unsynchronized access at 
AllocationFileLoaderService.java:[line 117] Synchronized access at 
AllocationFileLoaderService.java:[line 212] Synchronized access at 
AllocationFileLoaderService.java:[line 228] Synchronized access at 
AllocationFileLoaderService.java:[line 269]{code}
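
The usual fix for IS2_INCONSISTENT_SYNC is to guard every access to the field with the same lock; below is a minimal sketch with a hypothetical stand-in field, not the actual AllocationFileLoaderService code:

```java
public class ReloadListenerHolder {
    // Stand-in for the reloadListener field flagged above: FindBugs warns
    // because one access path reads the field without holding the lock that
    // the other paths hold. Making every access synchronized (or using a
    // volatile field, where appropriate) removes the inconsistency.
    private Runnable reloadListener;

    public synchronized void setReloadListener(Runnable listener) {
        this.reloadListener = listener;
    }

    public synchronized Runnable getReloadListener() {
        return reloadListener;
    }

    public static void main(String[] args) {
        ReloadListenerHolder holder = new ReloadListenerHolder();
        holder.setReloadListener(() -> System.out.println("reloaded"));
        holder.getReloadListener().run(); // prints "reloaded"
    }
}
```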
 






[jira] [Commented] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-05 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502232#comment-16502232
 ] 

Sunil Govindan commented on YARN-8258:
--

{{org.apache.hadoop.security.authentication.server.JWTRedirectAuthenticationHandler}}
 is configured as the HTTP authentication type, hence this filter will be 
present as AuthenticationFilter: 
{code:java}
filterHolder.getName()=authentication
filterHolder.getClassName()=org.apache.hadoop.security.authentication.server.AuthenticationFilter,
type=org.apache.hadoop.security.authentication.server.JWTRedirectAuthenticationHandler{code}
 
If security is enabled, the SPNEGO filter will be loaded as well.
Now I am quoting code from AuthenticationFilter:
{code:java}
public void doFilter(ServletRequest request,  ServletResponse response, 
FilterChain filterChain) throws IOException, ServletException {

    ...
...
    try {
      boolean newToken = false;
      AuthenticationToken token;
      try {
        token = getToken(httpRequest);
        
      }
      catch (AuthenticationException ex) {
        ...
      }

      if (authHandler.managementOperation(token, httpRequest, httpResponse)) {
        if (token == null) {
          token = authHandler.authenticate(httpRequest, httpResponse);
          if (token != null && token != AuthenticationToken.ANONYMOUS) {
            if (token.getMaxInactives() > 0) {
              token.setMaxInactives(System.currentTimeMillis()
                  + getMaxInactiveInterval() * 1000);
            }{code}
When the auth handler gets invoked in this doFilter snippet, *authHandler* will 
be JWTRedirectAuthenticationHandler instead of the Kerberos auth handler. It 
will process the JWT cookie and create a token.
 
Now quoting the last part of the doFilter code:
{code:java}
        if (token != null) {
          
          
          final AuthenticationToken authToken = token;
          httpRequest = new HttpServletRequestWrapper(httpRequest) {
            @Override
            public String getAuthType() {
              return authToken.getType();
            }
            @Override
            public String getRemoteUser() {
              return authToken.getUserName();
            }
            @Override
            public Principal getUserPrincipal() {
             return (authToken != AuthenticationToken.ANONYMOUS) ?
                  authToken : null;
            }
          };

...
...
          doFilter(filterChain, httpRequest, httpResponse);
        }{code}
 
This token is populated by the JWT handler, and then the proper httpRequest is 
created with it and passed on to the further filters in the chain.
Hence even if SPNEGO comes later, this won't be a problem. In fact this code has 
been present for a long time and works well with Knox SSO and UI1. This Jira 
extends the same to UI2. 
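To make the wrapping step above concrete, here is a minimal, self-contained sketch of the same pattern the filter uses: once a token is established, the incoming request is wrapped so that downstream filters see the authenticated identity. The Request, Token, and AuthenticatedRequest names here are simplified stand-ins for illustration only, not the real Hadoop or servlet classes, and the user name is made up.

```java
// Simplified stand-ins for HttpServletRequest / AuthenticationToken,
// for illustration only -- not the actual Hadoop or servlet types.
interface Request {
    String getAuthType();
    String getRemoteUser();
}

final class Token {
    final String type;
    final String userName;
    Token(String type, String userName) { this.type = type; this.userName = userName; }
}

// Mirrors the HttpServletRequestWrapper override in AuthenticationFilter:
// the wrapper answers identity questions from the token while the original
// (unauthenticated) request is left untouched underneath.
final class AuthenticatedRequest implements Request {
    private final Request delegate;
    private final Token token;
    AuthenticatedRequest(Request delegate, Token token) {
        this.delegate = delegate;
        this.token = token;
    }
    @Override public String getAuthType()   { return token.type; }
    @Override public String getRemoteUser() { return token.userName; }
}

public class WrapDemo {
    public static void main(String[] args) {
        // Raw request carries no identity before authentication.
        Request raw = new Request() {
            @Override public String getAuthType()   { return null; }
            @Override public String getRemoteUser() { return null; }
        };
        // Token as produced by an auth handler (user name is hypothetical).
        Token token = new Token("jwt", "ambari-qa");
        Request wrapped = new AuthenticatedRequest(raw, token);
        if (!"ambari-qa".equals(wrapped.getRemoteUser())) throw new AssertionError();
        if (!"jwt".equals(wrapped.getAuthType())) throw new AssertionError();
        System.out.println(wrapped.getRemoteUser());
    }
}
```

Because later filters in the chain receive the wrapped request, whichever handler ran first (JWT here) determines the identity they observe, which is why the ordering relative to SPNEGO does not matter.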

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch, 
> YARN-8258.003.patch, YARN-8258.004.patch, YARN-8258.005.patch, 
> YARN-8258.006.patch, YARN-8258.007.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally all filters from default context has to be inherited to UI2 context 
> as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8399) NodeManager is giving 403 GSS exception post upgrade to 3.1 in secure mode

2018-06-05 Thread Sunil Govindan (JIRA)
Sunil Govindan created YARN-8399:


 Summary: NodeManager is giving 403 GSS exception post upgrade to 
3.1 in secure mode
 Key: YARN-8399
 URL: https://issues.apache.org/jira/browse/YARN-8399
 Project: Hadoop YARN
  Issue Type: Bug
  Components: timelineservice
Reporter: Sunil Govindan
Assignee: Sunil Govindan


Getting a 403 GSS exception while accessing the NM HTTP port via curl. 
{code:java}
curl -k -i --negotiate -u: https://:/node
HTTP/1.1 401 Authentication required
Date: Tue, 05 Jun 2018 17:59:00 GMT
Date: Tue, 05 Jun 2018 17:59:00 GMT
Pragma: no-cache
WWW-Authenticate: Negotiate
Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
Cache-Control: must-revalidate,no-cache,no-store
Content-Type: text/html;charset=iso-8859-1
Content-Length: 264

HTTP/1.1 403 GSSException: Failure unspecified at GSS-API level (Mechanism 
level: Request is a replay (34)){code}






[jira] [Updated] (YARN-8399) NodeManager is giving 403 GSS exception post upgrade to 3.1 in secure mode

2018-06-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8399:
-
Attachment: YARN-8399.001.patch

> NodeManager is giving 403 GSS exception post upgrade to 3.1 in secure mode
> --
>
> Key: YARN-8399
> URL: https://issues.apache.org/jira/browse/YARN-8399
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineservice
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8399.001.patch
>
>
> Getting 403 GSS exception while accessing NM http port via curl. 
> {code:java}
> curl -k -i --negotiate -u: https://:/node
> HTTP/1.1 401 Authentication required
> Date: Tue, 05 Jun 2018 17:59:00 GMT
> Date: Tue, 05 Jun 2018 17:59:00 GMT
> Pragma: no-cache
> WWW-Authenticate: Negotiate
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 264
> HTTP/1.1 403 GSSException: Failure unspecified at GSS-API level (Mechanism 
> level: Request is a replay (34)){code}





