[jira] [Commented] (YARN-10080) Support show app id on localizer thread pool

2020-01-08 Thread Abhishek Modi (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011518#comment-17011518
 ] 

Abhishek Modi commented on YARN-10080:
--

Thanks [~cane] for working on this. The changes look good to me. I will wait for 
the Jenkins result.

> Support show app id on localizer thread pool
> 
>
> Key: YARN-10080
> URL: https://issues.apache.org/jira/browse/YARN-10080
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: zhoukang
>Assignee: zhoukang
>Priority: Major
> Attachments: YARN-10080-001.patch
>
>
> Currently when we are troubleshooting a container localizer issue, if we want 
> to analyze the jstack output with thread details, we cannot figure out which 
> thread is processing the given container. So I want to add the app id to the thread name
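A minimal sketch of the idea (not the actual patch; the class name and name format below are illustrative): embed the application ID in each worker thread's name when the localizer pool is created, so a jstack dump immediately shows which app a thread serves.

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class LocalizerThreadNaming {
  // Illustrative factory: each thread is named like
  // "ContainerLocalizer-application_1578459866213_0001-2", so jstack
  // output directly reveals which application a localizer thread serves.
  public static ExecutorService createLocalizerPool(String appId, int nThreads) {
    AtomicInteger counter = new AtomicInteger();
    return Executors.newFixedThreadPool(nThreads, runnable -> {
      Thread t = new Thread(runnable,
          "ContainerLocalizer-" + appId + "-" + counter.incrementAndGet());
      t.setDaemon(true);
      return t;
    });
  }
}
{code}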






[jira] [Commented] (YARN-9538) Document scheduler/app activities and REST APIs

2020-01-08 Thread Tao Yang (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011505#comment-17011505
 ] 

Tao Yang commented on YARN-9538:


Thanks [~cheersyang] for finding the mistakes and providing better 
descriptions. I'll fix them as soon as possible.

> Document scheduler/app activities and REST APIs
> ---
>
> Key: YARN-9538
> URL: https://issues.apache.org/jira/browse/YARN-9538
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
> Attachments: YARN-9538.001.patch, YARN-9538.002.patch
>
>
> Add documentation for scheduler/app activities in CapacityScheduler.md and 
> ResourceManagerRest.md.






[jira] [Commented] (YARN-9538) Document scheduler/app activities and REST APIs

2020-01-08 Thread Weiwei Yang (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011477#comment-17011477
 ] 

Weiwei Yang commented on YARN-9538:
---

Hi [~Tao Yang]

A few comments:

CS
 # The newly added document should be added to the table of contents of the page
 # "Activities have been integrated into the application attempt page, should 
be shown below the table of outstanding requests when there is any outstanding 
request" -> "Activities info is available in the application attempt page on the RM 
Web UI, where outstanding requests are aggregated and displayed."

 

RM

 

1. +The scheduler activities API currently supports Capacity Scheduler and 
provides a way to get scheduler activities in a single scheduling process, it 
will trigger recording scheduler activities in next scheduling process and then 
take last required scheduler activities from cache as the response. The 
response have hierarchical structure with multiple levels and important 
scheduling details which are organized by the sequence of scheduling process:

-> 

The scheduler activities RESTful API can fetch scheduler activities info 
recorded in a scheduling cycle. The API returns a message that includes 
important scheduling activities info.

 

2. nodeId - specified node ID, if not specified, scheduler will record next 
scheduling process on any node.

->

specified node ID, if not specified, the scheduler will record the scheduling 
activities info for the next scheduling cycle on all nodes

 

+### Elements of the *Activities* object
+
+| Item | Data Type | Description |
+|: |: |: |
+| nodeId | string | The node ID on which scheduler tries to schedule 
containers. |
+| timestamp | long | Timestamp of the activities. |
+| dateTime | string | Date time of the activities. |
+| diagnostic | string | Top diagnostic of the activities about empty results, 
unavailable environments, or illegal input parameters, such as "waiting for 
display", "waiting for the next allocation", "Not Capacity Scheduler", "No node 
manager running in the cluster", "Got invalid groupBy: xx, valid groupBy types: 
DIAGNOSTICS" |
+| allocations | array of allocations | A collection of allocation objects. |
+

 

3. +| nodeId | string | The node ID on which scheduler tries to schedule 
containers. |

->

The node ID on which the scheduler tries to allocate containers.

 

4. +| diagnostic | string | Top diagnostic of the activities about empty 
results, unavailable environments, or illegal input parameters, such as 
"waiting for display", "waiting for the next allocation", "Not Capacity 
Scheduler", "No node manager running in the cluster", "Got invalid groupBy: xx, 
valid groupBy types: DIAGNOSTICS" |

Please remove "Not Capacity Scheduler".

 

5. Please replace all "ids" to "IDs"

 

6. four node activities will be separated into two groups

->

4 node activities info will be grouped into 2 groups.

 

7. + Application activities include useful scheduling info for a specified 
application, the response have hierarchical structure with multiple levels:

->

the response has a hierarchical layout with following fields:

 

8. * **AppActivities** - AppActivities are root structure of application 
activities within basic information.

->

is the root element?


9. +* **Applications** - Allocations are allocation attempts at app level 
queried from the cache.
->

Shouldn't this be "applications"?
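For reference while revising the docs, a hedged sketch (Java 11 HttpClient; host, port, node ID, and app ID are placeholders, and exact parameter spellings should follow the final doc) of exercising the two endpoints under discussion, including the optional nodeId and groupBy parameters mentioned above:

{code}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ActivitiesApiSketch {
  public static void main(String[] args) throws Exception {
    HttpClient client = HttpClient.newHttpClient();

    // Scheduler activities: triggers recording for the next scheduling
    // cycle; nodeId is optional -- omit it to record on all nodes.
    String schedulerUrl =
        "http://rm-host:8088/ws/v1/cluster/scheduler/activities?nodeId=node1:1234";

    // App activities, grouped by diagnostics (appId is a placeholder).
    String appUrl = "http://rm-host:8088/ws/v1/cluster/scheduler/app-activities/"
        + "application_1578459866213_0001?groupBy=diagnostic";

    for (String url : new String[] {schedulerUrl, appUrl}) {
      HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();
      System.out.println(client.send(req, HttpResponse.BodyHandlers.ofString()).body());
    }
  }
}
{code}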

 

> Document scheduler/app activities and REST APIs
> ---
>
> Key: YARN-9538
> URL: https://issues.apache.org/jira/browse/YARN-9538
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
> Attachments: YARN-9538.001.patch, YARN-9538.002.patch
>
>
> Add documentation for scheduler/app activities in CapacityScheduler.md and 
> ResourceManagerRest.md.






[jira] [Commented] (YARN-4946) RM should not consider an application as COMPLETED when log aggregation is not in a terminal state

2020-01-08 Thread Steven Rand (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-4946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011446#comment-17011446
 ] 

Steven Rand commented on YARN-4946:
---

Any update on what we want to do here? It seems like we're starting to plan new 
releases, and I think it'd be good to either revert or make some adjustment 
before those come out.

> RM should not consider an application as COMPLETED when log aggregation is 
> not in a terminal state
> --
>
> Key: YARN-4946
> URL: https://issues.apache.org/jira/browse/YARN-4946
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation
>Affects Versions: 2.8.0
>Reporter: Robert Kanter
>Assignee: Szilard Nemeth
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-4946.001.patch, YARN-4946.002.patch, 
> YARN-4946.003.patch, YARN-4946.004.patch
>
>
> MAPREDUCE-6415 added a tool that combines the aggregated log files for each 
> Yarn App into a HAR file.  When run, it seeds the list by looking at the 
> aggregated logs directory, and then filters out ineligible apps.  One of the 
> criteria involves checking with the RM that an Application's log aggregation 
> status is not still running and has not failed.  When the RM "forgets" about 
> an older completed Application (e.g. RM failover, enough time has passed, 
> etc), the tool won't find the Application in the RM and will just assume that 
> its log aggregation succeeded, even if it actually failed or is still running.
> We can solve this problem by doing the following:
> The RM should not consider an app to be fully completed (and thus removed 
> from its history) until the aggregation status has reached a terminal state 
> (e.g. SUCCEEDED, FAILED, TIME_OUT).
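A hedged sketch of the proposed gate (LogAggregationStatus and its values are from the YARN API; the surrounding RMApp wiring is omitted): the RM would only treat an app as fully completed once its log aggregation status is terminal.

{code}
import org.apache.hadoop.yarn.api.records.LogAggregationStatus;

public class LogAggregationGate {
  // Terminal states per the description: SUCCEEDED, FAILED, TIME_OUT.
  // Treating DISABLED as terminal is an assumption here -- there is
  // nothing to wait for when aggregation is switched off.
  static boolean isTerminal(LogAggregationStatus status) {
    switch (status) {
      case SUCCEEDED:
      case FAILED:
      case TIME_OUT:
      case DISABLED:
        return true;
      default: // e.g. NOT_START, RUNNING, RUNNING_WITH_FAILURE
        return false;
    }
  }
}
{code}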






[jira] [Commented] (YARN-9538) Document scheduler/app activities and REST APIs

2020-01-08 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011439#comment-17011439
 ] 

Hadoop QA commented on YARN-9538:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
32m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-yarn-site in the patch failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | YARN-9538 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12990371/YARN-9538.002.patch |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux dd96d2ac90cf 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8fe01db |
| maven | version: Apache Maven 3.3.9 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-YARN-Build/25356/artifact/out/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-site.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/25356/artifact/out/whitespace-eol.txt
 |
| Max. process+thread count | 325 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/25356/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Document scheduler/app activities and REST APIs
> ---
>
> Key: YARN-9538
> URL: https://issues.apache.org/jira/browse/YARN-9538
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
> Attachments: YARN-9538.001.patch, YARN-9538.002.patch
>
>
> Add documentation for scheduler/app activities in CapacityScheduler.md and 
> ResourceManagerRest.md.






[jira] [Commented] (YARN-9567) Add diagnostics for outstanding resource requests on app attempts page

2020-01-08 Thread Weiwei Yang (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011437#comment-17011437
 ] 

Weiwei Yang commented on YARN-9567:
---

Hi [~Tao Yang],

The screenshots look good.

I am not a UI expert; I just want to make sure a few cases are covered by the 
patch:
 # Since this is a CS-only feature, please make sure nothing breaks when FS is 
enabled
 # Does the table support paging?

> Add diagnostics for outstanding resource requests on app attempts page
> --
>
> Key: YARN-9567
> URL: https://issues.apache.org/jira/browse/YARN-9567
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
> Attachments: YARN-9567.001.patch, YARN-9567.002.patch, 
> image-2019-06-04-17-29-29-368.png, image-2019-06-04-17-31-31-820.png, 
> image-2019-06-04-17-58-11-886.png, image-2019-06-14-11-21-41-066.png, 
> no_diagnostic_at_first.png, 
> show_diagnostics_after_requesting_app_activities_REST_API.png
>
>
> Currently on the app attempt page we can see outstanding resource requests; it 
> would be helpful for users to know why requests are pending if we can join this 
> app's diagnostics with them. 
> As discussed with [~cheersyang], we can passively load diagnostics from the cache 
> of completed app activities instead of actively triggering recording, which may 
> bring uncontrollable risks.
> For example:
> (1) At first we can see no diagnostic in cache if app activities not 
> triggered below the outstanding requests.
> !no_diagnostic_at_first.png|width=793,height=248!
> (2) After requesting the application activities REST API, we can see 
> diagnostics now.
> !show_diagnostics_after_requesting_app_activities_REST_API.png|width=1046,height=276!
>  






[jira] [Commented] (YARN-9698) [Umbrella] Tools to help migration from Fair Scheduler to Capacity Scheduler

2020-01-08 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011431#comment-17011431
 ] 

Brahma Reddy Battula commented on YARN-9698:


[~pbacsko] thanks for the prompt reply. We are planning the release for mid-March. 
Could you mark this Jira's target version?

> [Umbrella] Tools to help migration from Fair Scheduler to Capacity Scheduler
> 
>
> Key: YARN-9698
> URL: https://issues.apache.org/jira/browse/YARN-9698
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Reporter: Weiwei Yang
>Priority: Major
>  Labels: fs2cs
> Attachments: FS-CS Migration.pdf
>
>
> We see that some users want to migrate from Fair Scheduler to Capacity Scheduler. 
> This Jira is created as an umbrella to track all related efforts for the 
> migration; the scope contains
>  * Bug fixes
>  * Adding missing features
>  * Migration tools that help to generate CS configs based on FS, validate 
> configs, etc.
>  * Documentation
> This is part of the CS component; the purpose is to make the migration process 
> smooth.






[jira] [Commented] (YARN-9050) [Umbrella] Usability improvements for scheduler activities

2020-01-08 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011424#comment-17011424
 ] 

Brahma Reddy Battula commented on YARN-9050:


Thanks for the prompt reply. We are planning to release 3.3.0 by mid-March; I 
hope we can finish by then.

> [Umbrella] Usability improvements for scheduler activities
> --
>
> Key: YARN-9050
> URL: https://issues.apache.org/jira/browse/YARN-9050
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: image-2018-11-23-16-46-38-138.png
>
>
> We have made some usability improvements for scheduler activities based on 
> YARN 3.1 in our cluster, as follows:
>  1. Not available for multi-thread asynchronous scheduling. App and node 
> activities may be confused when multiple scheduling threads record activities 
> of different allocation processes in the same variables, like appsAllocation 
> and recordingNodesAllocation in ActivitiesManager. I think these variables 
> should be thread-local to keep activities clear among multiple threads (see 
> the sketch after this description).
>  2. Incomplete activities for the multi-node lookup mechanism, since 
> ActivitiesLogger will skip recording through \{{if (node == null || 
> activitiesManager == null) }} when node is null, which indicates that the 
> allocation is for multiple nodes. We need to support recording activities for 
> the multi-node lookup mechanism.
>  3. Current app activities cannot meet the requirements of diagnostics. For 
> example, we can know that a node doesn't match a request, but it is hard to 
> know why, especially when using placement constraints, where it's difficult to 
> make a detailed diagnosis manually. So I propose to improve the diagnoses of 
> activities: add a diagnosis for the placement constraints check, update the 
> insufficient-resource diagnosis with detailed info (like 'insufficient 
> resource names:[memory-mb]'), and so on.
>  4. Add more useful fields for app activities. In some scenarios we need to 
> distinguish different requests but can't locate them based on the app 
> activities info; some other fields, such as allocation tags, can help to filter 
> what we want. We have added containerPriority, allocationRequestId 
> and allocationTags fields in AppAllocation.
>  5. Filter app activities by key fields. Sometimes the results of app 
> activities are massive and it's hard to find what we want. We have supported 
> filtering by allocation-tags to meet requirements from some apps; moreover, we 
> can take container-priority and allocation-request-id as candidates if necessary.
>  6. Aggregate app activities by diagnoses. For a single allocation process, 
> activities can still be massive in a large cluster. We frequently want to 
> know why a request can't be allocated in the cluster, and it's hard to check 
> every node manually, so aggregating app activities by diagnoses is necessary. 
> We have added a groupingType parameter to the app-activities REST API for 
> this, which supports grouping by 
> diagnostics.
> I think we can have a discussion about these points; useful improvements which 
> are accepted will be added to the patch. Thanks.
> Running design doc is attached 
> [here|https://docs.google.com/document/d/1pwf-n3BCLW76bGrmNPM4T6pQ3vC4dVMcN2Ud1hq1t2M/edit#heading=h.2jnaobmmfne5].
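To illustrate point 1 above, a minimal, hypothetical sketch of thread-local recording state, so that concurrent scheduling threads do not interleave their activities (the String buffer stands in for the real recording structures):

{code}
import java.util.ArrayList;
import java.util.List;

public class ThreadLocalActivities {
  // Assumption: each asynchronous scheduling thread records one full
  // allocation cycle independently, so the buffer is per-thread.
  private static final ThreadLocal<List<String>> APPS_ALLOCATION =
      ThreadLocal.withInitial(ArrayList::new);

  static void record(String activity) {
    APPS_ALLOCATION.get().add(activity);
  }

  static List<String> finishCycle() {
    List<String> recorded = APPS_ALLOCATION.get();
    APPS_ALLOCATION.remove(); // avoid leaking state into the next cycle
    return recorded;
  }
}
{code}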






[jira] [Resolved] (YARN-5542) Scheduling of opportunistic containers

2020-01-08 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula resolved YARN-5542.

Fix Version/s: 3.3.0
   Resolution: Fixed

Resolving, as all of its subtasks are closed.

> Scheduling of opportunistic containers
> --
>
> Key: YARN-5542
> URL: https://issues.apache.org/jira/browse/YARN-5542
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Konstantinos Karanasos
>Priority: Major
> Fix For: 3.3.0
>
>
> This JIRA groups all efforts related to the scheduling of opportunistic 
> containers. 
> It includes the scheduling of opportunistic containers through the central RM 
> (YARN-5220), through distributed scheduling (YARN-2877), as well as the 
> scheduling of containers based on actual node utilization (YARN-1011) and the 
> container promotion/demotion (YARN-5085).






[jira] [Commented] (YARN-5542) Scheduling of opportunistic containers

2020-01-08 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011418#comment-17011418
 ] 

Brahma Reddy Battula commented on YARN-5542:


Yes, I think we can close this. Thanks [~abmodi] and [~kkaranasos].

> Scheduling of opportunistic containers
> --
>
> Key: YARN-5542
> URL: https://issues.apache.org/jira/browse/YARN-5542
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Konstantinos Karanasos
>Priority: Major
>
> This JIRA groups all efforts related to the scheduling of opportunistic 
> containers. 
> It includes the scheduling of opportunistic containers through the central RM 
> (YARN-5220), through distributed scheduling (YARN-2877), as well as the 
> scheduling of containers based on actual node utilization (YARN-1011) and the 
> container promotion/demotion (YARN-5085).






[jira] [Commented] (YARN-9414) Application Catalog for YARN applications

2020-01-08 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011413#comment-17011413
 ] 

Brahma Reddy Battula commented on YARN-9414:


[~eyang] thanks for the prompt reply. OK, I will take this into account.

> Application Catalog for YARN applications
> -
>
> Key: YARN-9414
> URL: https://issues.apache.org/jira/browse/YARN-9414
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN Appstore.pdf, YARN-Application-Catalog.pdf
>
>
> YARN native services provides a web services API to improve the usability of 
> application deployment on Hadoop using a collection of Docker images.  It would 
> be nice to have an application catalog system which provides an editorial and 
> search interface for YARN applications.  This improves the usability of YARN 
> for managing the life cycle of applications.  






[jira] [Commented] (YARN-9538) Document scheduler/app activities and REST APIs

2020-01-08 Thread Tao Yang (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011387#comment-17011387
 ] 

Tao Yang commented on YARN-9538:


Attached v2 patch, which has been checked via Hugo in my local test environment.

> Document scheduler/app activities and REST APIs
> ---
>
> Key: YARN-9538
> URL: https://issues.apache.org/jira/browse/YARN-9538
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
> Attachments: YARN-9538.001.patch, YARN-9538.002.patch
>
>
> Add documentation for scheduler/app activities in CapacityScheduler.md and 
> ResourceManagerRest.md.






[jira] [Commented] (YARN-9014) runC container runtime

2020-01-08 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011386#comment-17011386
 ] 

Brahma Reddy Battula commented on YARN-9014:


[~ebadger] thanks for the prompt reply. OK, I will treat this feature as 
partially merged.

> runC container runtime
> --
>
> Key: YARN-9014
> URL: https://issues.apache.org/jira/browse/YARN-9014
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Jason Darrell Lowe
>Assignee: Eric Badger
>Priority: Major
>  Labels: Docker
> Attachments: OciSquashfsRuntime.v001.pdf, 
> RuncContainerRuntime.v002.pdf
>
>
> This JIRA tracks a YARN container runtime that supports running containers in 
> images built by Docker but the runtime does not use Docker directly, and 
> Docker does not have to be installed on the nodes.  The runtime leverages the 
> [OCI runtime standard|https://github.com/opencontainers/runtime-spec] to 
> launch containers, so an OCI-compliant runtime like {{runc}} is required.  
> {{runc}} has the benefit of not requiring a daemon like {{dockerd}} to be 
> running in order to launch/control containers.
> The layers comprising the Docker image are uploaded to HDFS as 
> [squashfs|http://tldp.org/HOWTO/SquashFS-HOWTO/whatis.html] images, enabling 
> the runtime to efficiently download and execute directly on the compressed 
> layers.  This saves image unpack time and space on the local disk.  The image 
> layers, like other entries in the YARN distributed cache, can be spread 
> across the YARN local disks, increasing the available space for storing 
> container images on each node.
> A design document will be posted shortly.






[jira] [Commented] (YARN-9052) Replace all MockRM submit method definitions with a builder

2020-01-08 Thread Sunil G (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011384#comment-17011384
 ] 

Sunil G commented on YARN-9052:
---

MockRM has been one of the more confusing test classes in YARN, and it was used 
and maintained poorly for a long time. I have seen 20+ variations of submitApp 
calls, and when I wanted to add a new parameter such as app priority or app 
timeout, it was painful to write yet another submitApp overload and change 20+ 
method signatures. I think many of us went through this in the past.

Hence, when an effort was made to make this cleaner and easier to maintain for 
the future, I was happy to see it. Moreover, it helps any future usage of 
MockRM to be much cleaner.

That being said, there seem to be some short-term challenges due to this, and 
considering it was for a much better purpose, I definitely back [~ebadger]'s 
point here and support such activities. However, contributors and committers 
could also communicate such work on the mailing lists so that surprises are 
avoided; it would let all members know about the ongoing activity, and they 
could chime in and share their thoughts on such short improvement projects. 
Apologies that this step was missed on my side; I will take care of such things 
in the future.

> Replace all MockRM submit method definitions with a builder
> ---
>
> Key: YARN-9052
> URL: https://issues.apache.org/jira/browse/YARN-9052
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: 
> YARN-9052-004withlogs-patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt,
>  YARN-9052-testlogs003-justfailed.txt, 
> YARN-9052-testlogs003-patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt,
>  YARN-9052-testlogs004-justfailed.txt, YARN-9052.001.patch, 
> YARN-9052.002.patch, YARN-9052.003.patch, YARN-9052.004.patch, 
> YARN-9052.004.withlogs.patch, YARN-9052.005.patch, YARN-9052.006.patch, 
> YARN-9052.007.patch, YARN-9052.008.patch, YARN-9052.009.patch, 
> YARN-9052.009.patch, YARN-9052.testlogs.002.patch, 
> YARN-9052.testlogs.002.patch, YARN-9052.testlogs.003.patch, 
> YARN-9052.testlogs.patch
>
>
> MockRM has 31 definitions of submitApp, most of them having more than an 
> acceptable number of parameters, ranging from 2 to even 22, which 
> makes the code completely unreadable.
> On top of the unreadability, it's very hard to follow what RMApp will be produced 
> for tests, as they often pass a lot of empty / null values as parameters.
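A hedged sketch of the builder shape such a refactor typically introduces (the class, fields, and usage are illustrative, not necessarily those in the patches):

{code}
// Illustrative replacement for the many submitApp(...) overloads;
// only a few representative fields are shown.
public final class AppSubmissionData {
  final String name;
  final int memoryMb;
  final String queue;

  private AppSubmissionData(Builder b) {
    this.name = b.name;
    this.memoryMb = b.memoryMb;
    this.queue = b.queue;
  }

  public static final class Builder {
    private String name = "app";
    private int memoryMb = 200;
    private String queue = "default";

    public Builder withName(String name) { this.name = name; return this; }
    public Builder withMemory(int mb) { this.memoryMb = mb; return this; }
    public Builder withQueue(String queue) { this.queue = queue; return this; }
    public AppSubmissionData build() { return new AppSubmissionData(this); }
  }
}
// Hypothetical usage: rm.submitApp(new AppSubmissionData.Builder()
//     .withMemory(1024).withQueue("a").build());
{code}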






[jira] [Updated] (YARN-9538) Document scheduler/app activities and REST APIs

2020-01-08 Thread Tao Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-9538:
---
Attachment: YARN-9538.002.patch

> Document scheduler/app activities and REST APIs
> ---
>
> Key: YARN-9538
> URL: https://issues.apache.org/jira/browse/YARN-9538
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
> Attachments: YARN-9538.001.patch, YARN-9538.002.patch
>
>
> Add documentation for scheduler/app activities in CapacityScheduler.md and 
> ResourceManagerRest.md.






[jira] [Commented] (YARN-9512) [JDK11] TestAuxServices#testCustomizedAuxServiceClassPath ClassCastException: class jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class java.net.URLC

2020-01-08 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011374#comment-17011374
 ] 

Akira Ajisaka commented on YARN-9512:
-

Hi [~snemeth], how is this issue going? I'd like to take this over.

> [JDK11] TestAuxServices#testCustomizedAuxServiceClassPath ClassCastException: 
> class jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class 
> java.net.URLClassLoader
> ---
>
> Key: YARN-9512
> URL: https://issues.apache.org/jira/browse/YARN-9512
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Siyao Meng
>Assignee: Szilard Nemeth
>Priority: Major
>
> Found in maven JDK 11 unit test run. Compiled on JDK 8:
> {code}
> [ERROR] 
> testCustomizedAuxServiceClassPath(org.apache.hadoop.yarn.server.nodemanager.containermanager.TestAuxServices)
>   Time elapsed: 0.019 s  <<< ERROR!java.lang.ClassCastException: class 
> jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class 
> java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and 
> java.net.URLClassLoader are in module java.base of loader 'bootstrap')
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.TestAuxServices$ServiceC.getMetaData(TestAuxServices.java:197)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices.serviceStart(AuxServices.java:315)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.TestAuxServices.testCustomizedAuxServiceClassPath(TestAuxServices.java:344)
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
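For context, a minimal sketch of why the cast breaks on JDK 11 and a common workaround (the jar path is a placeholder; whether this matches the eventual fix is an assumption):

{code}
import java.net.URL;
import java.net.URLClassLoader;

public class ClassLoaderProbe {
  public static void main(String[] args) throws Exception {
    ClassLoader cl = ClassLoaderProbe.class.getClassLoader();
    // On JDK 8 the application class loader IS a URLClassLoader, so
    // "(URLClassLoader) cl" works; on JDK 9+ it is
    // jdk.internal.loader.ClassLoaders$AppClassLoader and the cast throws
    // the ClassCastException seen above.

    // JDK-11-safe alternative: build an explicit URLClassLoader over the
    // jars you need instead of casting the system loader.
    URLClassLoader ucl = new URLClassLoader(
        new URL[] { new URL("file:/tmp/aux-service.jar") }, cl);
    System.out.println(ucl.getURLs()[0]);
  }
}
{code}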






[jira] [Updated] (YARN-10080) Support show app id on localizer thread pool

2020-01-08 Thread zhoukang (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhoukang updated YARN-10080:

Attachment: YARN-10080-001.patch

> Support show app id on localizer thread pool
> 
>
> Key: YARN-10080
> URL: https://issues.apache.org/jira/browse/YARN-10080
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: zhoukang
>Assignee: zhoukang
>Priority: Major
> Attachments: YARN-10080-001.patch
>
>
> Currently when we are troubleshooting a container localizer issue, if we want 
> to analyze the jstack output with thread details, we cannot figure out which 
> thread is processing the given container. So I want to add the app id to the thread name






[jira] [Updated] (YARN-10080) Support show app id on localizer thread pool

2020-01-08 Thread zhoukang (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhoukang updated YARN-10080:

Summary: Support show app id on localizer thread pool  (was: Support show 
container id on localizer thread pool)

> Support show app id on localizer thread pool
> 
>
> Key: YARN-10080
> URL: https://issues.apache.org/jira/browse/YARN-10080
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: zhoukang
>Assignee: zhoukang
>Priority: Major
>
> Currently when we are troubleshooting a container localizer issue, if we want 
> to analyze the jstack output with thread details, we cannot figure out which 
> thread is processing the given container. So I want to add the container id to 
> the thread name






[jira] [Updated] (YARN-10080) Support show app id on localizer thread pool

2020-01-08 Thread zhoukang (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhoukang updated YARN-10080:

Description: Currently when we are troubleshooting a container localizer 
issue, if we want to analyze the jstack with thread detail, we can not figure 
out which thread is processing the given container. So i want to add app id on 
the thread name  (was: Currently when we are troubleshooting a container 
localizer issue, if we want to analyze the jstack with thread detail, we can 
not figure out which thread is processing the given container. So i want to add 
container id on the thread name)

> Support show app id on localizer thread pool
> 
>
> Key: YARN-10080
> URL: https://issues.apache.org/jira/browse/YARN-10080
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: zhoukang
>Assignee: zhoukang
>Priority: Major
>
> Currently when we are troubleshooting a container localizer issue, if we want 
> to analyze the jstack output with thread details, we cannot figure out which 
> thread is processing the given container. So I want to add the app id to the thread name






[jira] [Created] (YARN-10080) Support show container id on localizer thread pool

2020-01-08 Thread zhoukang (Jira)
zhoukang created YARN-10080:
---

 Summary: Support show container id on localizer thread pool
 Key: YARN-10080
 URL: https://issues.apache.org/jira/browse/YARN-10080
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Reporter: zhoukang
Assignee: zhoukang


Currently when we are troubleshooting a container localizer issue, if we want 
to analyze the jstack output with thread details, we cannot figure out which thread is 
processing the given container. So I want to add the container id to the thread name






[jira] [Commented] (YARN-9390) Add support for configurable Resource Calculator in Opportunistic Scheduler.

2020-01-08 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011358#comment-17011358
 ] 

Hadoop QA commented on YARN-9390:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} YARN-9390 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-9390 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962577/YARN-9390.001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/25355/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Add support for configurable Resource Calculator in Opportunistic Scheduler.
> 
>
> Key: YARN-9390
> URL: https://issues.apache.org/jira/browse/YARN-9390
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-9390.001.patch
>
>
> Right now, the Opportunistic scheduler uses a hard-coded 
> DominantResourceCalculator and there is no option to change it to other 
> resource calculators. This Jira is to make the resource calculator 
> configurable for the Opportunistic scheduler.
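A hedged sketch of what making the calculator configurable could look like (the config key is illustrative, not necessarily the one the patch defines; Configuration.getClass and ReflectionUtils.newInstance are standard Hadoop utilities):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;
import org.apache.hadoop.yarn.util.resource.DominantResourceCalculator;
import org.apache.hadoop.yarn.util.resource.ResourceCalculator;

public class OpportunisticCalculatorFactory {
  // Illustrative property name only.
  static final String CALCULATOR_KEY =
      "yarn.resourcemanager.opportunistic.resource-calculator.class";

  static ResourceCalculator create(Configuration conf) {
    Class<? extends ResourceCalculator> clazz = conf.getClass(
        CALCULATOR_KEY, DominantResourceCalculator.class, ResourceCalculator.class);
    // Falls back to the current hard-coded default when the key is unset.
    return ReflectionUtils.newInstance(clazz, conf);
  }
}
{code}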






[jira] [Commented] (YARN-5542) Scheduling of opportunistic containers

2020-01-08 Thread Abhishek Modi (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011357#comment-17011357
 ] 

Abhishek Modi commented on YARN-5542:
-

[~brahmareddy] [~kkaranasos] - I have moved the remaining open Jiras to YARN-10079. 
Should we close this Jira now, as all the sub-tasks under it are completed?

> Scheduling of opportunistic containers
> --
>
> Key: YARN-5542
> URL: https://issues.apache.org/jira/browse/YARN-5542
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Konstantinos Karanasos
>Priority: Major
>
> This JIRA groups all efforts related to the scheduling of opportunistic 
> containers. 
> It includes the scheduling of opportunistic containers through the central RM 
> (YARN-5220), through distributed scheduling (YARN-2877), as well as the 
> scheduling of containers based on actual node utilization (YARN-1011) and the 
> container promotion/demotion (YARN-5085).






[jira] [Updated] (YARN-2886) Estimating waiting time in NM container queues

2020-01-08 Thread Abhishek Modi (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-2886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-2886:

Parent Issue: YARN-10079  (was: YARN-5542)

> Estimating waiting time in NM container queues
> --
>
> Key: YARN-2886
> URL: https://issues.apache.org/jira/browse/YARN-2886
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>Priority: Major
>
> This JIRA is about estimating the waiting time of each NM queue.
> Having these estimates is crucial for the distributed scheduling of container 
> requests, as it allows the LocalRM to decide in which NMs to queue the 
> queuable container requests.






[jira] [Updated] (YARN-7604) Fix some minor typos in the opportunistic container logging

2020-01-08 Thread Abhishek Modi (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-7604:

Parent Issue: YARN-10079  (was: YARN-5542)

> Fix some minor typos in the opportunistic container logging
> ---
>
> Key: YARN-7604
> URL: https://issues.apache.org/jira/browse/YARN-7604
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
> Attachments: YARN-7604.01.patch
>
>
> Fix some minor text issues. 






[jira] [Updated] (YARN-5414) Integrate NodeQueueLoadMonitor with ClusterNodeTracker

2020-01-08 Thread Abhishek Modi (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-5414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-5414:

Parent Issue: YARN-10079  (was: YARN-5542)

> Integrate NodeQueueLoadMonitor with ClusterNodeTracker
> --
>
> Key: YARN-5414
> URL: https://issues.apache.org/jira/browse/YARN-5414
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: container-queuing, distributed-scheduling, scheduler
>Reporter: Arun Suresh
>Assignee: Abhishek Modi
>Priority: Major
>
> The {{ClusterNodeTracker}} tracks the states of clusterNodes and provides 
> convenience methods like sort and filter.
> The {{NodeQueueLoadMonitor}} should use the {{ClusterNodeTracker}} instead of 
> maintaining its own data-structure of node information.






[jira] [Updated] (YARN-5688) Make allocation of opportunistic containers asynchronous

2020-01-08 Thread Abhishek Modi (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-5688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-5688:

Parent Issue: YARN-10079  (was: YARN-5542)

> Make allocation of opportunistic containers asynchronous
> 
>
> Key: YARN-5688
> URL: https://issues.apache.org/jira/browse/YARN-5688
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Abhishek Modi
>Priority: Major
>
> In the current implementation of the 
> {{OpportunisticContainerAllocatorAMService}}, we synchronously perform the 
> allocation of opportunistic containers. This results in "blocking" the 
> service at the RM when scheduling the opportunistic containers.
> The {{OpportunisticContainerAllocator}} should instead asynchronously run as 
> a separate thread.
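A minimal sketch of the threading change described (illustrative only; the real service would hook into the AM heartbeat path):

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncOpportunisticAllocator {
  // One dedicated daemon thread so allocation work no longer blocks the
  // RPC handler that received the AM heartbeat.
  private final ExecutorService allocatorThread =
      Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r, "opportunistic-container-allocator");
        t.setDaemon(true);
        return t;
      });

  void scheduleAllocation(Runnable allocationWork) {
    allocatorThread.submit(allocationWork); // returns immediately
  }
}
{code}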






[jira] [Updated] (YARN-9941) Opportunistic scheduler metrics should be reset during fail-over.

2020-01-08 Thread Abhishek Modi (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-9941:

Parent Issue: YARN-10079  (was: YARN-5542)

> Opportunistic scheduler metrics should be reset during fail-over.
> -
>
> Key: YARN-9941
> URL: https://issues.apache.org/jira/browse/YARN-9941
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Major
>







[jira] [Updated] (YARN-9390) Add support for configurable Resource Calculator in Opportunistic Scheduler.

2020-01-08 Thread Abhishek Modi (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-9390:

Parent Issue: YARN-10079  (was: YARN-5542)

> Add support for configurable Resource Calculator in Opportunistic Scheduler.
> 
>
> Key: YARN-9390
> URL: https://issues.apache.org/jira/browse/YARN-9390
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-9390.001.patch
>
>
> Right now, the Opportunistic scheduler uses a hard-coded 
> DominantResourceCalculator and there is no option to change it to other 
> resource calculators. This Jira is to make the resource calculator 
> configurable for the Opportunistic scheduler.






[jira] [Created] (YARN-10079) Scheduling of opportunistic containers - Phase 2

2020-01-08 Thread Abhishek Modi (Jira)
Abhishek Modi created YARN-10079:


 Summary: Scheduling of opportunistic containers - Phase 2
 Key: YARN-10079
 URL: https://issues.apache.org/jira/browse/YARN-10079
 Project: Hadoop YARN
  Issue Type: New Feature
Reporter: Abhishek Modi


This JIRA groups all efforts related to improvements in the scheduling of 
opportunistic containers.

Phase 1 of this work was done as part of YARN-5542.






[jira] [Commented] (YARN-5542) Scheduling of opportunistic containers

2020-01-08 Thread Abhishek Modi (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011350#comment-17011350
 ] 

Abhishek Modi commented on YARN-5542:
-

[~kkaranasos] [~brahmareddy] I will move the remaining open Jiras to a new Jira, 
and then we should be good to close this. We have completed the current set of 
improvements.

> Scheduling of opportunistic containers
> --
>
> Key: YARN-5542
> URL: https://issues.apache.org/jira/browse/YARN-5542
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Konstantinos Karanasos
>Priority: Major
>
> This JIRA groups all efforts related to the scheduling of opportunistic 
> containers. 
> It includes the scheduling of opportunistic containers through the central RM 
> (YARN-5220), through distributed scheduling (YARN-2877), as well as the 
> scheduling of containers based on actual node utilization (YARN-1011) and the 
> container promotion/demotion (YARN-5085).






[jira] [Commented] (YARN-9511) [JDK11] TestAuxServices#testRemoteAuxServiceClassPath YarnRuntimeException: The remote jarfile should not be writable by group or others. The current Permission is 436

2020-01-08 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011312#comment-17011312
 ] 

Akira Ajisaka commented on YARN-9511:
-

bq. Also could please confirm that you are using JDK11 (this issue is primarily 
about the JDK11 related part).

I cannot reproduce this issue with OpenJDK 11.0.3 on Mac with umask 022. Can I 
remove the [JDK11] header from this issue?

> [JDK11] TestAuxServices#testRemoteAuxServiceClassPath YarnRuntimeException: 
> The remote jarfile should not be writable by group or others. The current 
> Permission is 436
> ---
>
> Key: YARN-9511
> URL: https://issues.apache.org/jira/browse/YARN-9511
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Siyao Meng
>Assignee: Szilard Nemeth
>Priority: Major
>
> Found in maven JDK 11 unit test run. Compiled on JDK 8.
> {code}
> [ERROR] 
> testRemoteAuxServiceClassPath(org.apache.hadoop.yarn.server.nodemanager.containermanager.TestAuxServices)
>   Time elapsed: 0.551 s  <<< 
> ERROR!org.apache.hadoop.yarn.exceptions.YarnRuntimeException: The remote 
> jarfile should not be writable by group or others. The current Permission is 
> 436
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices.serviceInit(AuxServices.java:202)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.TestAuxServices.testRemoteAuxServiceClassPath(TestAuxServices.java:268)
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}
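One note that may help triage: "Permission is 436" prints the mode in decimal; 436 decimal is octal 0664 (rw-rw-r--), i.e. group-writable, which is exactly what the quoted check rejects. A hedged sketch of tightening the test jar's permissions (java.nio, POSIX file systems only; the path is a placeholder):

{code}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermissions;

public class FixJarPermissions {
  public static void main(String[] args) throws Exception {
    Path jar = Paths.get("/tmp/aux-service.jar"); // placeholder path
    // 0644 (rw-r--r--): not writable by group or others, which is what
    // the AuxServices check quoted above requires.
    Files.setPosixFilePermissions(jar, PosixFilePermissions.fromString("rw-r--r--"));
  }
}
{code}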






[jira] [Commented] (YARN-8283) [Umbrella] MaWo - A Master Worker framework on top of YARN Services

2020-01-08 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-8283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011068#comment-17011068
 ] 

Eric Yang commented on YARN-8283:
-

[~brahmareddy] This looks like a feature that will not be closed by the 3.3.0 
release.  There are checkstyle errors in the patches, which is the reason 
I did not commit them.  Python 2.7 was deprecated on Jan 1, 2020, so this 
contribution will need some updates to keep it going.  Please skip this feature 
in the release notes.  Thanks

> [Umbrella] MaWo - A Master Worker framework on top of YARN Services
> ---
>
> Key: YARN-8283
> URL: https://issues.apache.org/jira/browse/YARN-8283
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Yesha Vora
>Assignee: Yesha Vora
>Priority: Major
> Attachments: [Design Doc] [YARN-8283] MaWo - A Master Worker 
> framework on top of YARN Services.pdf
>
>
> There is a need for an application / framework to handle Master-Worker 
> scenarios. There are existing frameworks on YARN which can be used to run a 
> job in distributed manner such as Mapreduce, Tez, Spark etc. But 
> master-worker use-cases usually are force-fed into one of these existing 
> frameworks which have been designed primarily around data-parallelism instead 
> of generic Master Worker type of computations.
> In this JIRA, we’d like to contribute MaWo - a YARN Service based framework 
> that achieves this goal. The overall goal is to create an app that can take 
> an input job specification with tasks, their durations and have a Master dish 
> the tasks off to a predetermined set of workers. The components will be 
> responsible for making sure that the tasks and the overall job finish in 
> specific time durations.
> We have been using a version of the MaWo framework for running unit tests of 
> Hadoop in a parallel manner on an existing Hadoop YARN cluster. What 
> typically takes 10 hours to run all of Hadoop project’s unit-tests can finish 
> under 20 minutes on a MaWo app of about 50 containers!
> YARN-3307 was an original attempt at this but through a first-class YARN app. 
> In this JIRA, we instead use YARN Service for orchestration so that our code 
> can focus on the core Master Worker paradigm.






[jira] [Commented] (YARN-9018) Add functionality to AuxiliaryLocalPathHandler to return all locations to read for a given path

2020-01-08 Thread Eric Payne (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011054#comment-17011054
 ] 

Eric Payne commented on YARN-9018:
--

+1. I will commit tomorrow if no objections.

> Add functionality to AuxiliaryLocalPathHandler to return all locations to 
> read for a given path
> ---
>
> Key: YARN-9018
> URL: https://issues.apache.org/jira/browse/YARN-9018
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.0.3, 2.8.5
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
>Priority: Major
> Attachments: YARN-9018.001.patch
>
>
> Analogous to LocalDirAllocator#getAllLocalPathsToRead, this will allow aux 
> services (and other components) to use this function, which they rely on when 
> using objects of the former class.
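A hedged sketch of the API shape being proposed, mirroring LocalDirAllocator#getAllLocalPathsToRead (the added method's name is an assumption; getLocalPathForRead is part of the existing AuxiliaryLocalPathHandler contract):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.Path;

// Illustrative view of the extended contract: alongside the existing
// single-path lookup, expose every local directory that may hold the
// given relative path, as LocalDirAllocator already does.
public interface AuxiliaryLocalPathHandlerSketch {
  Path getLocalPathForRead(String path) throws IOException;

  // Proposed addition (name assumed): all candidate read locations.
  Iterable<Path> getAllLocalPathsForRead(String path) throws IOException;
}
{code}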






[jira] [Commented] (YARN-9018) Add functionality to AuxiliaryLocalPathHandler to return all locations to read for a given path

2020-01-08 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011046#comment-17011046
 ] 

Hadoop QA commented on YARN-9018:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m  
4s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 31s{color} | {color:orange} root: The patch generated 2 new + 75 unchanged - 
0 fixed = 77 total (was 75) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
54s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
28s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} hadoop-mapreduce-client-shuffle in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}126m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | YARN-9018 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948028/YARN-9018.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ff704004964b 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b1e07d2 |
| maven | version: 

[jira] [Commented] (YARN-9018) Add functionality to AuxiliaryLocalPathHandler to return all locations to read for a given path

2020-01-08 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011042#comment-17011042
 ] 

Hadoop QA commented on YARN-9018:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 36s{color} | {color:orange} root: The patch generated 2 new + 76 unchanged - 
0 fixed = 78 total (was 76) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
28s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-mapreduce-client-shuffle in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | YARN-9018 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948028/YARN-9018.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux fbc9b0fec902 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b1e07d2 |
| maven | version: 

[jira] [Commented] (YARN-5542) Scheduling of opportunistic containers

2020-01-08 Thread Konstantinos Karanasos (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011026#comment-17011026
 ] 

Konstantinos Karanasos commented on YARN-5542:
--

Hi [~brahmareddy], we could move the JIRAs that are still open to a new 
umbrella JIRA so that we can close this. Would that make sense?

[~abmodi], are you actively working on any of these subtasks?

> Scheduling of opportunistic containers
> --
>
> Key: YARN-5542
> URL: https://issues.apache.org/jira/browse/YARN-5542
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Konstantinos Karanasos
>Priority: Major
>
> This JIRA groups all efforts related to the scheduling of opportunistic 
> containers. 
> It includes the scheduling of opportunistic container through the central RM 
> (YARN-5220), through distributed scheduling (YARN-2877), as well as the 
> scheduling of containers based on actual node utilization (YARN-1011) and the 
> container promotion/demotion (YARN-5085).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9052) Replace all MockRM submit method definitions with a builder

2020-01-08 Thread Eric Badger (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011005#comment-17011005
 ] 

Eric Badger commented on YARN-9052:
---

I'm sure we all agree that there's a balancing act between invasive code 
cleanup and increasing tech debt. And I'm also sure that everyone will have 
their own slightly differing opinion on which of those is more important. It 
might be a good idea to get more eyes on this broad issue by posting to the 
larger Hadoop mailing list. But I'll add my 2 cents here. 

I generally like to follow the principle of "if something is hard, do it 
frequently", or whatever version of that phrase you prefer. If someone 
wants to make changes that are good for the long term health of Hadoop, I don't 
want to discourage that. I understand that that means increased pain in the 
short term, but I believe that it promotes health in the long term. The code 
cleanup is the "something that is hard" in this case. If we wait for years for 
stuff to pile up and then try to deal with everything all at once, then every 
committer will basically have a full time job of refactoring code for a few 
months until we're back in a reasonable state. However, if we do it more 
frequently, we can fix some things while keeping each chunk much smaller and 
more tractable. But I'm also not going to say that filing 10+ JIRAs in 
succession is a reasonable rate for code cleanups. Maybe once in a while it 
isn't that bad, though.

I do believe that code cleanup/refactoring is important for the long-term 
health of the code, even if the net result of each individual patch is no 
change in functionality. It makes the code easier to work with in the 
future, and oftentimes makes it simpler or less confusing, which leads to a 
decreased likelihood of bugs being introduced. When code is written poorly, 
trying to modify that code is a nightmare and bugs are created because it's so 
hard to follow what's going on in the code. So while the immediate effect is 
purely negative (patches don't apply anymore, branches diverge), the long-term 
effect (easier code to manage/debug) is positive. It all depends on what you're 
optimizing for. 

I agree with the sentiment that we should treat each code cleanup as a bug 
fix/feature in that we need to test the code rigorously even if _nothing 
changed, we just refactored_. Especially when dealing with non-test code, 
cleanups and refactors need to be done with a huge amount of detail and care, 
since they risk breaking existing functionality. But this is no different than 
any other patch that someone could put up and commit. There is always a risk of 
breaking existing functionality. We need to trust committers to do proper 
testing and not commit code that is too risky for little benefit. 

All of what I've said above assumes that the code cleanup/refactoring is 
actually making the code cleaner, more readable, and easier to modify/fix. Any 
changes that do not meet those criteria should be closed as Won't Fix, since 
they provide neither short-term nor long-term benefit.

> Replace all MockRM submit method definitions with a builder
> ---
>
> Key: YARN-9052
> URL: https://issues.apache.org/jira/browse/YARN-9052
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: 
> YARN-9052-004withlogs-patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt,
>  YARN-9052-testlogs003-justfailed.txt, 
> YARN-9052-testlogs003-patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt,
>  YARN-9052-testlogs004-justfailed.txt, YARN-9052.001.patch, 
> YARN-9052.002.patch, YARN-9052.003.patch, YARN-9052.004.patch, 
> YARN-9052.004.withlogs.patch, YARN-9052.005.patch, YARN-9052.006.patch, 
> YARN-9052.007.patch, YARN-9052.008.patch, YARN-9052.009.patch, 
> YARN-9052.009.patch, YARN-9052.testlogs.002.patch, 
> YARN-9052.testlogs.002.patch, YARN-9052.testlogs.003.patch, 
> YARN-9052.testlogs.patch
>
>
> MockRM has 31 definitions of submitApp, most of them having more than an 
> acceptable number of parameters, ranging from 2 to even 22, which makes the 
> code completely unreadable.
> On top of unreadability, it's very hard to follow what RmApp will be produced 
> for tests as they often pass a lot of empty / null values as parameters.
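
For illustration, a hedged sketch of the builder direction this JIRA proposes; 
the class, field, and method names below are hypothetical, not the identifiers 
used in the attached patches:

{code:java}
// Hypothetical sketch of a submission builder; all names are made up for
// illustration and are not the identifiers used in the YARN-9052 patches.
public final class AppSubmissionSpec {
  private final String name;
  private final int memoryMb;
  private final String queue;

  private AppSubmissionSpec(Builder b) {
    this.name = b.name;
    this.memoryMb = b.memoryMb;
    this.queue = b.queue;
  }

  public static final class Builder {
    // Sensible defaults replace the long, null-heavy parameter lists.
    private String name = "app";
    private int memoryMb = 1024;
    private String queue = "default";

    public Builder name(String name) { this.name = name; return this; }
    public Builder memoryMb(int mb) { this.memoryMb = mb; return this; }
    public Builder queue(String queue) { this.queue = queue; return this; }
    public AppSubmissionSpec build() { return new AppSubmissionSpec(this); }
  }
}
{code}

A test would then call something like 
{{new AppSubmissionSpec.Builder().name("test").queue("a").build()}}, setting 
only the parameters it actually cares about instead of passing 20+ nulls.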



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-9018) Add functionality to AuxiliaryLocalPathHandler to return all locations to read for a given path

2020-01-08 Thread Eric Payne (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011001#comment-17011001
 ] 

Eric Payne edited comment on YARN-9018 at 1/8/20 8:36 PM:
--

The patch still applies and builds for me. I kicked the precommit build


was (Author: eepayne):
I kicked the precommit build

> Add functionality to AuxiliaryLocalPathHandler to return all locations to 
> read for a given path
> ---
>
> Key: YARN-9018
> URL: https://issues.apache.org/jira/browse/YARN-9018
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.0.3, 2.8.5
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
>Priority: Major
> Attachments: YARN-9018.001.patch
>
>
> Analogous to LocalDirAllocator#getAllLocalPathsToRead, this will allow aux 
> services (and other components) to use this function that they rely on when 
> using the former class's objects.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9018) Add functionality to AuxiliaryLocalPathHandler to return all locations to read for a given path

2020-01-08 Thread Eric Payne (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011001#comment-17011001
 ] 

Eric Payne commented on YARN-9018:
--

I kicked the precommit build

> Add functionality to AuxiliaryLocalPathHandler to return all locations to 
> read for a given path
> ---
>
> Key: YARN-9018
> URL: https://issues.apache.org/jira/browse/YARN-9018
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.0.3, 2.8.5
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
>Priority: Major
> Attachments: YARN-9018.001.patch
>
>
> Analogous to LocalDirAllocator#getAllLocalPathsToRead, this will allow aux 
> services (and other components) to use this function that they rely on when 
> using the former class's objects.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9018) Add functionality to AuxiliaryLocalPathHandler to return all locations to read for a given path

2020-01-08 Thread Eric Payne (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010996#comment-17010996
 ] 

Eric Payne commented on YARN-9018:
--

We've been running with this internally for a long time now. I'd like to see it 
get into the Apache source base.

> Add functionality to AuxiliaryLocalPathHandler to return all locations to 
> read for a given path
> ---
>
> Key: YARN-9018
> URL: https://issues.apache.org/jira/browse/YARN-9018
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.0.3, 2.8.5
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
>Priority: Major
> Attachments: YARN-9018.001.patch
>
>
> Analogous to LocalDirAllocator#getAllLocalPathsToRead, this will allow aux 
> services (and other components) to use this function that they rely on when 
> using the former class's objects.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8672) TestContainerManager#testLocalingResourceWhileContainerRunning occasionally times out

2020-01-08 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-8672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010993#comment-17010993
 ] 

Jim Brennan commented on YARN-8672:
---

Thanks [~ebadger]!

> TestContainerManager#testLocalingResourceWhileContainerRunning occasionally 
> times out
> -
>
> Key: YARN-8672
> URL: https://issues.apache.org/jira/browse/YARN-8672
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.10.0, 3.2.0
>Reporter: Jason Darrell Lowe
>Assignee: Chandni Singh
>Priority: Major
> Fix For: 3.3.0, 3.2.2, 3.1.4, 2.10.1
>
> Attachments: YARN-8672-branch-2.10.001.patch, 
> YARN-8672-branch-2.10.002.patch, YARN-8672-branch-2.10.003.patch, 
> YARN-8672-branch-3.1.001.patch, YARN-8672-branch-3.2.001.patch, 
> YARN-8672.001.patch, YARN-8672.002.patch, YARN-8672.003.patch, 
> YARN-8672.004.patch, YARN-8672.005.patch, YARN-8672.006.patch, 
> YARN-8672.007.patch, YARN-8672.008.patch
>
>
> Precommit builds have been failing in 
> TestContainerManager#testLocalingResourceWhileContainerRunning.  I have been 
> able to reproduce the problem without any patch applied if I run the test 
> enough times.  It looks like something is removing container tokens from the 
> nmPrivate area just as a new localizer starts.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8672) TestContainerManager#testLocalingResourceWhileContainerRunning occasionally times out

2020-01-08 Thread Eric Badger (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-8672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-8672:
--
   Fix Version/s: 2.10.1
  3.1.4
  3.2.2
Target Version/s: 3.3.0  (was: 3.3.0, 3.2.2, 3.1.4, 2.10.1)

> TestContainerManager#testLocalingResourceWhileContainerRunning occasionally 
> times out
> -
>
> Key: YARN-8672
> URL: https://issues.apache.org/jira/browse/YARN-8672
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.10.0, 3.2.0
>Reporter: Jason Darrell Lowe
>Assignee: Chandni Singh
>Priority: Major
> Fix For: 3.3.0, 3.2.2, 3.1.4, 2.10.1
>
> Attachments: YARN-8672-branch-2.10.001.patch, 
> YARN-8672-branch-2.10.002.patch, YARN-8672-branch-2.10.003.patch, 
> YARN-8672-branch-3.1.001.patch, YARN-8672-branch-3.2.001.patch, 
> YARN-8672.001.patch, YARN-8672.002.patch, YARN-8672.003.patch, 
> YARN-8672.004.patch, YARN-8672.005.patch, YARN-8672.006.patch, 
> YARN-8672.007.patch, YARN-8672.008.patch
>
>
> Precommit builds have been failing in 
> TestContainerManager#testLocalingResourceWhileContainerRunning.  I have been 
> able to reproduce the problem without any patch applied if I run the test 
> enough times.  It looks like something is removing container tokens from the 
> nmPrivate area just as a new localizer starts.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8672) TestContainerManager#testLocalingResourceWhileContainerRunning occasionally times out

2020-01-08 Thread Eric Badger (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-8672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-8672:
--
Target Version/s: 3.3.0, 3.2.2, 3.1.4, 2.10.1  (was: 3.3.0)

Thanks for bearing with me on all these unique patches, [~Jim_Brennan]! I've 
committed them to their respective branches. 

This JIRA has now been committed to trunk (3.3), branch-3.2, branch-3.1, and 
branch-2.10.

> TestContainerManager#testLocalingResourceWhileContainerRunning occasionally 
> times out
> -
>
> Key: YARN-8672
> URL: https://issues.apache.org/jira/browse/YARN-8672
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.10.0, 3.2.0
>Reporter: Jason Darrell Lowe
>Assignee: Chandni Singh
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-8672-branch-2.10.001.patch, 
> YARN-8672-branch-2.10.002.patch, YARN-8672-branch-2.10.003.patch, 
> YARN-8672-branch-3.1.001.patch, YARN-8672-branch-3.2.001.patch, 
> YARN-8672.001.patch, YARN-8672.002.patch, YARN-8672.003.patch, 
> YARN-8672.004.patch, YARN-8672.005.patch, YARN-8672.006.patch, 
> YARN-8672.007.patch, YARN-8672.008.patch
>
>
> Precommit builds have been failing in 
> TestContainerManager#testLocalingResourceWhileContainerRunning.  I have been 
> able to reproduce the problem without any patch applied if I run the test 
> enough times.  It looks like something is removing container tokens from the 
> nmPrivate area just as a new localizer starts.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9523) Build application catalog docker image as part of hadoop dist build

2020-01-08 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010983#comment-17010983
 ] 

Hadoop QA commented on YARN-9523:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
33m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-yarn-applications-catalog-docker in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | YARN-9523 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967721/YARN-9523.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux d866514f4bbc 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6899be5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/25352/testReport/ |
| Max. process+thread count | 307 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-docker
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-docker
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/25352/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Build application catalog docker image as part of hadoop dist build
> ---
>
> Key: YARN-9523
> URL: https://issues.apache.org/jira/browse/YARN-9523
> 

[jira] [Commented] (YARN-10028) Integrate the new abstract log servlet to the JobHistory server

2020-01-08 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010979#comment-17010979
 ] 

Hadoop QA commented on YARN-10028:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
58s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
11s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 40s{color} | {color:orange} root: The patch generated 2 new + 13 unchanged - 
1 fixed = 15 total (was 14) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
30s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 52s{color} 
| {color:red} hadoop-mapreduce-client-hs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.mapreduce.v2.hs.webapp.TestHsWebServicesJobConf |
|   | hadoop.mapreduce.v2.hs.webapp.TestHsWebServicesAttempts |
|   | hadoop.mapreduce.v2.hs.webapp.TestHsWebServices |
|   | hadoop.mapreduce.v2.hs.webapp.TestHsWebServicesJobsQuery |
|   | hadoop.mapreduce.v2.hs.webapp.TestHsWebServicesJobs |
|   | hadoop.mapreduce.v2.hs.webapp.TestHsWebServicesTasks |
|   | hadoop.mapreduce.v2.hs.TestJobHistoryServer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | YARN-10028 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12990236/YARN-10028.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  

[jira] [Commented] (YARN-9014) runC container runtime

2020-01-08 Thread Eric Badger (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010976#comment-17010976
 ] 

Eric Badger commented on YARN-9014:
---

[~brahmareddy], there are still pending JIRAs that will not be fixed in time 
for the 3.3.0 release. However, I would like to keep everything under this 
umbrella as it makes things easier to find and keep track of. The main 
RuncContainerRuntime feature has been implemented and committed via YARN-9560, 
YARN-9561, YARN-9562, and YARN-9884. The pending JIRAs cover extended 
functionality, bringing some features over from DockerLinuxContainerRuntime, 
but they are not required for the initial use case of just running Hadoop jobs 
inside of runC containers. 

> runC container runtime
> --
>
> Key: YARN-9014
> URL: https://issues.apache.org/jira/browse/YARN-9014
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Jason Darrell Lowe
>Assignee: Eric Badger
>Priority: Major
>  Labels: Docker
> Attachments: OciSquashfsRuntime.v001.pdf, 
> RuncContainerRuntime.v002.pdf
>
>
> This JIRA tracks a YARN container runtime that supports running containers in 
> images built by Docker but the runtime does not use Docker directly, and 
> Docker does not have to be installed on the nodes.  The runtime leverages the 
> [OCI runtime standard|https://github.com/opencontainers/runtime-spec] to 
> launch containers, so an OCI-compliant runtime like {{runc}} is required.  
> {{runc}} has the benefit of not requiring a daemon like {{dockerd}} to be 
> running in order to launch/control containers.
> The layers comprising the Docker image are uploaded to HDFS as 
> [squashfs|http://tldp.org/HOWTO/SquashFS-HOWTO/whatis.html] images, enabling 
> the runtime to efficiently download and execute directly on the compressed 
> layers.  This saves image unpack time and space on the local disk.  The image 
> layers, like other entries in the YARN distributed cache, can be spread 
> across the YARN local disks, increasing the available space for storing 
> container images on each node.
> A design document will be posted shortly.
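
As a rough illustration of what enabling such a runtime might look like, a 
hedged sketch follows; the property names and the {{runc}} value are 
assumptions modeled on the existing Docker runtime configuration, not verified 
against the committed patches:

{code:java}
// Hedged sketch only: property names and the "runc" value are assumptions
// modeled on the Docker runtime configuration, not verified against YARN-9560.
import org.apache.hadoop.conf.Configuration;

public class RuncRuntimeConfigSketch {
  public static Configuration enableRunc() {
    Configuration conf = new Configuration();
    // LinuxContainerExecutor dispatches to the configured Linux runtimes.
    conf.set("yarn.nodemanager.container-executor.class",
        "org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor");
    // Allow the runC runtime alongside the default runtime.
    conf.set("yarn.nodemanager.runtime.linux.allowed-runtimes", "default,runc");
    return conf;
  }
}
{code}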



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8672) TestContainerManager#testLocalingResourceWhileContainerRunning occasionally times out

2020-01-08 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-8672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010975#comment-17010975
 ] 

Hadoop QA commented on YARN-8672:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m 
53s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
59s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} branch-3.1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 433 unchanged - 1 fixed = 434 total (was 434) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
25s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:70a0ef5d4a6 |
| JIRA Issue | YARN-8672 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12990234/YARN-8672-branch-3.1.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4bdb11cd343a 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.1 / 08a1464 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/25351/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/25351/testReport/ |
| Max. process+thread count | 343 (vs. ulimit of 5500) |
| modules | C: 

[jira] [Commented] (YARN-7387) org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer fails intermittently

2020-01-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-7387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010966#comment-17010966
 ] 

Hudson commented on YARN-7387:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17834 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17834/])
YARN-7387: (ericp: rev b1e07d27cc1a26be4e5ebd1ab7b03ef15032bef0)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestIncreaseAllocationExpirer.java


> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
>  fails intermittently
> ---
>
> Key: YARN-7387
> URL: https://issues.apache.org/jira/browse/YARN-7387
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Jim Brennan
>Priority: Major
> Attachments: YARN-7387.001.patch
>
>
> {code}
> Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 52.481 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
> testDecreaseAfterIncreaseWithAllocationExpiration(org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer)
>   Time elapsed: 13.292 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<3072> but was:<4096>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer.testDecreaseAfterIncreaseWithAllocationExpiration(TestIncreaseAllocationExpirer.java:459)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10072) TestCSAllocateCustomResource failures

2020-01-08 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010952#comment-17010952
 ] 

Jim Brennan commented on YARN-10072:


Thanks [~epayne]!

> TestCSAllocateCustomResource failures
> -
>
> Key: YARN-10072
> URL: https://issues.apache.org/jira/browse/YARN-10072
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: yarn
>Affects Versions: 2.10.0
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
>  Labels: YARN
> Fix For: 3.3.0, 3.2.2, 3.1.4, 2.10.1
>
> Attachments: YARN-10072.001.patch, YARN-10072.002.patch
>
>
> This test is failing for us consistently in our internal 2.10 based branch.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7387) org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer fails intermittently

2020-01-08 Thread Eric Payne (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-7387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010940#comment-17010940
 ] 

Eric Payne commented on YARN-7387:
--

Thanks for the fix, [~Jim_Brennan]. The changes LGTM.
+1

> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
>  fails intermittently
> ---
>
> Key: YARN-7387
> URL: https://issues.apache.org/jira/browse/YARN-7387
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Jim Brennan
>Priority: Major
> Attachments: YARN-7387.001.patch
>
>
> {code}
> Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 52.481 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
> testDecreaseAfterIncreaseWithAllocationExpiration(org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer)
>   Time elapsed: 13.292 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<3072> but was:<4096>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer.testDecreaseAfterIncreaseWithAllocationExpiration(TestIncreaseAllocationExpirer.java:459)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-9414) Application Catalog for YARN applications

2020-01-08 Thread Eric Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang resolved YARN-9414.
-
Fix Version/s: 3.3.0
   Resolution: Fixed

[~brahmareddy] I moved the enhancement to next release.  This feature can go GA 
without the enhancements.

> Application Catalog for YARN applications
> -
>
> Key: YARN-9414
> URL: https://issues.apache.org/jira/browse/YARN-9414
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN Appstore.pdf, YARN-Application-Catalog.pdf
>
>
> YARN native services provides a web services API to improve usability of 
> application deployment on Hadoop using a collection of docker images.  It 
> would be nice to have an application catalog system which provides an 
> editorial and search interface for YARN applications.  This improves 
> usability of YARN for managing the life cycle of applications.  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9523) Build application catalog docker image as part of hadoop dist build

2020-01-08 Thread Eric Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-9523:

Parent Issue: YARN-10078  (was: YARN-9414)

> Build application catalog docker image as part of hadoop dist build
> ---
>
> Key: YARN-9523
> URL: https://issues.apache.org/jira/browse/YARN-9523
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9523.001.patch
>
>
> It would be nice to make Application catalog docker image as part of the 
> distribution.  The suggestion is to change from:
> {code:java}
> mvn clean package -Pnative,dist,docker{code}
> to
> {code:java}
> mvn clean package -Pnative,dist{code}
> User can still build tarball only using:
> {code:java}
> mvn clean package -DskipDocker -DskipTests -DskipShade -Pnative,dist{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8533) Multi-user support for application catalog

2020-01-08 Thread Eric Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-8533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8533:

Parent Issue: YARN-10078  (was: YARN-9414)

> Multi-user support for application catalog
> --
>
> Key: YARN-8533
> URL: https://issues.apache.org/jira/browse/YARN-8533
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Eric Yang
>Priority: Major
>
> The current application catalog will launch applications as the user who runs 
> the application catalog.  This allows for a personalized application catalog.  
> It would be nice if the application catalog could launch applications as the 
> end user who is viewing the application catalog.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9499) Support application catalog high availability

2020-01-08 Thread Eric Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-9499:

Parent Issue: YARN-10078  (was: YARN-9414)

> Support application catalog high availability
> -
>
> Key: YARN-9499
> URL: https://issues.apache.org/jira/browse/YARN-9499
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Priority: Major
>
> Application catalog is mostly a stateless web application.  It depends on 
> backend services to store state.  At this time, Solr is a single instance 
> server running in the same application catalog container.  It is possible to 
> externalize application catalog data to Solr Cloud to remove the single 
> instance Solr server.  This improves high availability of the application 
> catalog.
> This task focuses on how to configure the connection to an external Solr 
> Cloud for the application catalog container.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8532) Consolidate Yarn UI2 Service View with Application Catalog

2020-01-08 Thread Eric Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-8532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8532:

Parent Issue: YARN-10078  (was: YARN-9414)

> Consolidate Yarn UI2 Service View with Application Catalog
> --
>
> Key: YARN-8532
> URL: https://issues.apache.org/jira/browse/YARN-8532
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services, yarn-ui-v2
>Reporter: Eric Yang
>Priority: Major
>
> There are some overlaps between YARN UI2 and the Application Catalog.  The 
> same deployment feature exists in both.  It would be nice to present the 
> application catalog as the first view to the end user to speed up deployment 
> of applications.  UI2 is a monitoring and resource allocation and 
> prioritization UI.  It might be more user friendly to transfer the UI2 
> deployment feature into the Application Catalog to improve usability both for 
> the end user who launches the apps and for the system administrator who 
> monitors app usage.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8531) Link container logs from App detail page

2020-01-08 Thread Eric Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-8531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8531:

Parent Issue: YARN-10078  (was: YARN-9414)

> Link container logs from App detail page
> 
>
> Key: YARN-8531
> URL: https://issues.apache.org/jira/browse/YARN-8531
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Eric Yang
>Priority: Major
>
> It would be nice to have the container log files for a running application 
> viewable from the application detail page.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9414) Application Catalog for YARN applications

2020-01-08 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010893#comment-17010893
 ] 

Eric Yang commented on YARN-9414:
-

Move some enhancement work to next release.

> Application Catalog for YARN applications
> -
>
> Key: YARN-9414
> URL: https://issues.apache.org/jira/browse/YARN-9414
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN Appstore.pdf, YARN-Application-Catalog.pdf
>
>
> YARN native services provides a web services API to improve usability of 
> application deployment on Hadoop using a collection of docker images.  It 
> would be nice to have an application catalog system which provides an 
> editorial and search interface for YARN applications.  This improves 
> usability of YARN for managing the life cycle of applications.  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10078) YARN Application Catalog enhancement

2020-01-08 Thread Eric Yang (Jira)
Eric Yang created YARN-10078:


 Summary: YARN Application Catalog enhancement
 Key: YARN-10078
 URL: https://issues.apache.org/jira/browse/YARN-10078
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Eric Yang


This story continues the development work started in YARN-9414.  Some 
enhancements to the YARN application catalog can make the application more 
user friendly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9137) Get the IP and port of the docker container and display it on WEB UI2

2020-01-08 Thread Eric Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-9137:

Parent: (was: YARN-8472)
Issue Type: Wish  (was: Sub-task)

> Get the IP and port of the docker container and display it on WEB UI2
> -
>
> Key: YARN-9137
> URL: https://issues.apache.org/jira/browse/YARN-9137
> Project: Hadoop YARN
>  Issue Type: Wish
>Reporter: Xun Liu
>Priority: Major
>
> 1) When using a container network such as Calico, the IP of the container is 
> not the IP of the host, but is allocated in the private network, and the 
> different containers can be connected directly.
>  Exposing the services in the container through a reverse proxy such as Nginx 
> makes it easy for users to view the IP and port on WEB UI2 and to use the 
> services in the container, such as Tomcat, TensorBoard, and so on.
>  2) When not using a container network such as Calico, the container still has 
> its own container port.
> So we need to display the IP and port of the docker container on WEB UI2.






[jira] [Resolved] (YARN-7994) Add support for network-alias in docker run for user defined networks

2020-01-08 Thread Eric Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-7994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang resolved YARN-7994.
-
Resolution: Later

This feature doesn't seem to be making progress in container phase 2.  Mark it 
for later.

> Add support for network-alias in docker run for user defined networks 
> --
>
> Key: YARN-7994
> URL: https://issues.apache.org/jira/browse/YARN-7994
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
>  Labels: Docker
>
> Docker embedded DNS supports DNS resolution for containers via one or more of 
> their configured {{--network-alias}} values within a user-defined network. 
> DockerRunCommand should support this option so that DNS resolution works 
> through docker embedded DNS.
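A minimal sketch of the command line this implies is shown below; the class and 
method names are hypothetical stand-ins, not the real DockerRunCommand API:

{code}
// Hypothetical sketch: where --network-alias fits into a docker run
// command line for a user-defined network. Not YARN's actual API.
import java.util.ArrayList;
import java.util.List;

public class DockerRunSketch {
  public static List<String> buildRunCommand(String image, String network,
      List<String> aliases) {
    List<String> cmd = new ArrayList<>();
    cmd.add("docker");
    cmd.add("run");
    cmd.add("--net=" + network);            // user-defined network
    for (String alias : aliases) {
      cmd.add("--network-alias=" + alias);  // resolvable via embedded DNS
    }
    cmd.add(image);
    return cmd;
  }
}
{code}

Each alias then becomes resolvable by the other containers attached to the same 
user-defined network through Docker's embedded DNS.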






[jira] [Resolved] (YARN-8744) In some cases docker kill is used to stop non-privileged containers instead of sending the signal directly

2020-01-08 Thread Eric Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-8744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang resolved YARN-8744.
-
Resolution: Incomplete

Nice to have, but inconsequential detail.  There is no plan to fix this. 

> In some cases docker kill is used to stop non-privileged containers instead 
> of sending the signal directly
> --
>
> Key: YARN-8744
> URL: https://issues.apache.org/jira/browse/YARN-8744
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>  Labels: docker
>
> With YARN-8706, stopping docker containers was achieved by 
> 1. parsing the user specified {{STOPSIGNAL}} via docker inspect
> 2. executing {{docker kill --signal=}}
> Quoting [~ebadger]
> {quote}
> Additionally, for non-privileged containers, we don't need to call docker 
> kill. Instead, we can follow the code in handleContainerKill() and send the 
> signal directly. I think this code could probably be combined, since at this 
> point handleContainerKill() and handleContainerStop() will be doing the same 
> thing. The only difference is that the STOPSIGNAL will be used for the stop.
> {quote}
> To achieve the above, we need native code that accepts the name of the signal 
> rather than the value (number) of the signal. 
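A rough sketch of the approach, assuming shell-outs in place of the native code 
the description calls for (a real implementation would translate the signal 
name to a number and call kill(2) directly):

{code}
// Hedged sketch, not YARN's actual implementation: read the container's
// STOPSIGNAL via docker inspect, then deliver that signal to the container
// pid directly instead of shelling out to docker kill.
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class StopSignalSketch {
  // Prints e.g. "SIGTERM"; may be empty when the image sets no STOPSIGNAL.
  static String readStopSignal(String container) throws Exception {
    Process p = new ProcessBuilder("docker", "inspect",
        "--format", "{{.Config.StopSignal}}", container).start();
    try (BufferedReader r = new BufferedReader(
        new InputStreamReader(p.getInputStream()))) {
      return r.readLine();
    }
  }

  // Sends the named signal to the container's root pid via kill(1).
  static void signalPid(long pid, String signalName) throws Exception {
    String name = signalName.replaceFirst("^SIG", ""); // kill -s TERM <pid>
    new ProcessBuilder("kill", "-s", name, Long.toString(pid))
        .start().waitFor();
  }
}
{code}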






[jira] [Commented] (YARN-8472) YARN Container Phase 2

2020-01-08 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-8472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010873#comment-17010873
 ] 

Eric Yang commented on YARN-8472:
-

[~brahmareddy] Thank you for the heads up.  We can close this umbrella for 
3.3.0.  I think the only outstanding issue is YARN-9292, which would be good to 
have in 3.3.0 but is not absolutely required.  I have asked [~billie] to 
review, in case we can make the window.

> YARN Container Phase 2
> --
>
> Key: YARN-8472
> URL: https://issues.apache.org/jira/browse/YARN-8472
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>
> In YARN-3611, we have implemented basic Docker container support for YARN.  
> This story is the next phase to improve container usability.
> Several area for improvements are:
>  # Software defined network support
>  # Interactive shell to container
>  # User management sss/nscd integration
>  # Runc/containerd support
>  # Metrics/Logs integration with Timeline service v2 
>  # Docker container profiles
>  # Docker cgroup management






[jira] [Updated] (YARN-9292) Implement logic to keep docker image consistent in application that uses :latest tag

2020-01-08 Thread Eric Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-9292:

Target Version/s: 3.3.0

> Implement logic to keep docker image consistent in application that uses 
> :latest tag
> 
>
> Key: YARN-9292
> URL: https://issues.apache.org/jira/browse/YARN-9292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9292.001.patch, YARN-9292.002.patch, 
> YARN-9292.003.patch, YARN-9292.004.patch, YARN-9292.005.patch, 
> YARN-9292.006.patch
>
>
> A docker image with the latest tag can run in a YARN cluster without any 
> validation in the node managers. If an image with the latest tag is changed 
> while containers are launching, it might produce inconsistent results between 
> nodes. This surfaced toward the end of development for YARN-9184, which aims 
> to keep the docker image consistent within a job. One of the ideas to keep 
> the :latest tag consistent for a job is to use the docker image command to 
> figure out the image id and propagate that image id to the rest of the 
> container requests. There are some challenges to overcome:
>  # The latest tag does not exist on the node where the first container 
> starts. The first container will need to download the latest image and find 
> the image ID. This can introduce lag time before other containers start.
>  # If the image id is used to start other containers, container-executor may 
> have problems checking whether the image is coming from a trusted source. 
> Both the image name and ID must be supplied through the .cmd file to 
> container-executor. However, a hacker can supply an incorrect image id and 
> defeat the container-executor security checks.
> If we can overcome those challenges, it may be possible to keep the docker 
> image consistent within one application.
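A hedged sketch of the resolution step mentioned above (illustrative only, not 
the patch): the first container resolves what :latest currently points to, and 
that immutable image id is what would be propagated to the remaining container 
requests.

{code}
// Sketch: resolve ":latest" to an immutable image id once, then reuse it.
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class LatestTagResolver {
  // Prints e.g. "sha256:1a2b..."; requires the image to be present locally.
  static String resolveImageId(String imageWithTag) throws Exception {
    Process p = new ProcessBuilder("docker", "image", "inspect",
        "--format", "{{.Id}}", imageWithTag).start();
    try (BufferedReader r = new BufferedReader(
        new InputStreamReader(p.getInputStream()))) {
      return r.readLine();
    }
  }
}
{code}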






[jira] [Updated] (YARN-10028) Integrate the new abstract log servlet to the JobHistory server

2020-01-08 Thread Adam Antal (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated YARN-10028:
--
Attachment: YARN-10028.001.patch

> Integrate the new abstract log servlet to the JobHistory server
> ---
>
> Key: YARN-10028
> URL: https://issues.apache.org/jira/browse/YARN-10028
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Attachments: YARN-10028.001.patch
>
>
> Currently the JHS already incorporates a log servlet, but it is incapable of 
> serving REST calls. We can integrate the new common log servlet into the JHS 
> in order to provide a REST interface.






[jira] [Commented] (YARN-8672) TestContainerManager#testLocalingResourceWhileContainerRunning occasionally times out

2020-01-08 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-8672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010871#comment-17010871
 ] 

Jim Brennan commented on YARN-8672:
---

[~ebadger] I have uploaded a patch for branch-3.1 as well.

 

> TestContainerManager#testLocalingResourceWhileContainerRunning occasionally 
> times out
> -
>
> Key: YARN-8672
> URL: https://issues.apache.org/jira/browse/YARN-8672
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.10.0, 3.2.0
>Reporter: Jason Darrell Lowe
>Assignee: Chandni Singh
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-8672-branch-2.10.001.patch, 
> YARN-8672-branch-2.10.002.patch, YARN-8672-branch-2.10.003.patch, 
> YARN-8672-branch-3.1.001.patch, YARN-8672-branch-3.2.001.patch, 
> YARN-8672.001.patch, YARN-8672.002.patch, YARN-8672.003.patch, 
> YARN-8672.004.patch, YARN-8672.005.patch, YARN-8672.006.patch, 
> YARN-8672.007.patch, YARN-8672.008.patch
>
>
> Precommit builds have been failing in 
> TestContainerManager#testLocalingResourceWhileContainerRunning.  I have been 
> able to reproduce the problem without any patch applied if I run the test 
> enough times.  It looks like something is removing container tokens from the 
> nmPrivate area just as a new localizer starts.






[jira] [Updated] (YARN-8672) TestContainerManager#testLocalingResourceWhileContainerRunning occasionally times out

2020-01-08 Thread Jim Brennan (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-8672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Brennan updated YARN-8672:
--
Attachment: YARN-8672-branch-3.1.001.patch

> TestContainerManager#testLocalingResourceWhileContainerRunning occasionally 
> times out
> -
>
> Key: YARN-8672
> URL: https://issues.apache.org/jira/browse/YARN-8672
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.10.0, 3.2.0
>Reporter: Jason Darrell Lowe
>Assignee: Chandni Singh
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-8672-branch-2.10.001.patch, 
> YARN-8672-branch-2.10.002.patch, YARN-8672-branch-2.10.003.patch, 
> YARN-8672-branch-3.1.001.patch, YARN-8672-branch-3.2.001.patch, 
> YARN-8672.001.patch, YARN-8672.002.patch, YARN-8672.003.patch, 
> YARN-8672.004.patch, YARN-8672.005.patch, YARN-8672.006.patch, 
> YARN-8672.007.patch, YARN-8672.008.patch
>
>
> Precommit builds have been failing in 
> TestContainerManager#testLocalingResourceWhileContainerRunning.  I have been 
> able to reproduce the problem without any patch applied if I run the test 
> enough times.  It looks like something is removing container tokens from the 
> nmPrivate area just as a new localizer starts.






[jira] [Commented] (YARN-10072) TestCSAllocateCustomResource failures

2020-01-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010867#comment-17010867
 ] 

Hudson commented on YARN-10072:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17833 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17833/])
YARN-10072: TestCSAllocateCustomResource failures. Contributed by Jim (ericp: 
rev 6899be5a1729e49cff45090acd2cf4f54aeac089)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCSAllocateCustomResource.java


> TestCSAllocateCustomResource failures
> -
>
> Key: YARN-10072
> URL: https://issues.apache.org/jira/browse/YARN-10072
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: yarn
>Affects Versions: 2.10.0
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
>  Labels: YARN
> Attachments: YARN-10072.001.patch, YARN-10072.002.patch
>
>
> This test is failing for us consistently in our internal 2.10 based branch.






[jira] [Commented] (YARN-9292) Implement logic to keep docker image consistent in application that uses :latest tag

2020-01-08 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010815#comment-17010815
 ] 

Hadoop QA commented on YARN-9292:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} YARN-9292 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-9292 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12963465/YARN-9292.006.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/25349/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Implement logic to keep docker image consistent in application that uses 
> :latest tag
> 
>
> Key: YARN-9292
> URL: https://issues.apache.org/jira/browse/YARN-9292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9292.001.patch, YARN-9292.002.patch, 
> YARN-9292.003.patch, YARN-9292.004.patch, YARN-9292.005.patch, 
> YARN-9292.006.patch
>
>
> A docker image with the latest tag can run in a YARN cluster without any 
> validation in the node managers. If an image with the latest tag is changed 
> while containers are launching, it might produce inconsistent results between 
> nodes. This surfaced toward the end of development for YARN-9184, which aims 
> to keep the docker image consistent within a job. One of the ideas to keep 
> the :latest tag consistent for a job is to use the docker image command to 
> figure out the image id and propagate that image id to the rest of the 
> container requests. There are some challenges to overcome:
>  # The latest tag does not exist on the node where the first container 
> starts. The first container will need to download the latest image and find 
> the image ID. This can introduce lag time before other containers start.
>  # If the image id is used to start other containers, container-executor may 
> have problems checking whether the image is coming from a trusted source. 
> Both the image name and ID must be supplied through the .cmd file to 
> container-executor. However, a hacker can supply an incorrect image id and 
> defeat the container-executor security checks.
> If we can overcome those challenges, it may be possible to keep the docker 
> image consistent within one application.






[jira] [Commented] (YARN-9292) Implement logic to keep docker image consistent in application that uses :latest tag

2020-01-08 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010811#comment-17010811
 ] 

Eric Yang commented on YARN-9292:
-

[~billie] Can you help with the review of this issue?  If I recall correctly, 
the container ID is used to determine the latest docker image tag used by the 
application.  Without the container ID, it will not compute the latest image 
correctly for the given application.  It would be nice to have this issue 
closed for the Hadoop 3.3.0 release.  Thanks

> Implement logic to keep docker image consistent in application that uses 
> :latest tag
> 
>
> Key: YARN-9292
> URL: https://issues.apache.org/jira/browse/YARN-9292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9292.001.patch, YARN-9292.002.patch, 
> YARN-9292.003.patch, YARN-9292.004.patch, YARN-9292.005.patch, 
> YARN-9292.006.patch
>
>
> A docker image with the latest tag can run in a YARN cluster without any 
> validation in the node managers. If an image with the latest tag is changed 
> while containers are launching, it might produce inconsistent results between 
> nodes. This surfaced toward the end of development for YARN-9184, which aims 
> to keep the docker image consistent within a job. One of the ideas to keep 
> the :latest tag consistent for a job is to use the docker image command to 
> figure out the image id and propagate that image id to the rest of the 
> container requests. There are some challenges to overcome:
>  # The latest tag does not exist on the node where the first container 
> starts. The first container will need to download the latest image and find 
> the image ID. This can introduce lag time before other containers start.
>  # If the image id is used to start other containers, container-executor may 
> have problems checking whether the image is coming from a trusted source. 
> Both the image name and ID must be supplied through the .cmd file to 
> container-executor. However, a hacker can supply an incorrect image id and 
> defeat the container-executor security checks.
> If we can overcome those challenges, it may be possible to keep the docker 
> image consistent within one application.






[jira] [Commented] (YARN-9052) Replace all MockRM submit method definitions with a builder

2020-01-08 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010779#comment-17010779
 ] 

Ahmed Hussein commented on YARN-9052:
-

{quote}Yes, these are concerns of mine as well. Whenever very invasive changes 
like these are made, we have to balance any possible benefits against the very 
real added effort on backporting and upmerging.{quote}

[~epayne], I definitely agree with you about balancing the value added by code 
cleanup against the cost of the change. However, the frequency of code cleanup 
tickets on Jira has increased recently. The code cleanup Jiras even span 
consecutive numbers, which is really alarming.
See a sample of the Jiras below. 

Also, similar to bug-fixes and features, patches related to code refactoring 
should be thoroughly tested, which does not seem to be the case in the history 
of this very Jira. 

* [YARN-10005: Code improvements in 
MutableCSConfigurationProvider|https://issues.apache.org/jira/browse/YARN-10005]
* [YARN-10004: Javadoc of YarnConfigurationStore#initialize is not 
straightforward|https://issues.apache.org/jira/browse/YARN-10004]
* [YARN-10002: Code cleanup and improvements 
ConfigurationStoreBaseTest|https://issues.apache.org/jira/browse/YARN-10002]
* [YARN-10001: Add explanation of unimplemented methods in 
InMemoryConfigurationStore|https://issues.apache.org/jira/browse/YARN-10001]
* [YARN-10000: Code cleanup in 
FSSchedulerConfigurationStore|https://issues.apache.org/jira/browse/YARN-10000]
* [YARN-9999: TestFSSchedulerConfigurationStore: Extend from 
ConfigurationStoreBaseTest, general code 
cleanup|https://issues.apache.org/jira/browse/YARN-9999]
* [YARN-9998: Code cleanup in 
LeveldbConfigurationStore|https://issues.apache.org/jira/browse/YARN-9998]
* [YARN-9997: Code cleanup in 
ZKConfigurationStore|https://issues.apache.org/jira/browse/YARN-9997]
* [YARN-9996: Code cleanup in 
QueueAdminConfigurationMutationACLPolicy|https://issues.apache.org/jira/browse/YARN-9996]
* [YARN-9995: Code cleanup in 
TestSchedConfCLI|https://issues.apache.org/jira/browse/YARN-9995]
* [YARN-9989: Typo in CapacityScheduler documentation: Runtime 
Configuration|https://issues.apache.org/jira/browse/YARN-9989]
* [YARN-9680: Code cleanup in ResourcePluginManager init 
methods|https://issues.apache.org/jira/browse/YARN-9680]
* [YARN-9679: Regular code cleanup in 
TestResourcePluginManager|https://issues.apache.org/jira/browse/YARN-9679]

> Replace all MockRM submit method definitions with a builder
> ---
>
> Key: YARN-9052
> URL: https://issues.apache.org/jira/browse/YARN-9052
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: 
> YARN-9052-004withlogs-patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt,
>  YARN-9052-testlogs003-justfailed.txt, 
> YARN-9052-testlogs003-patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt,
>  YARN-9052-testlogs004-justfailed.txt, YARN-9052.001.patch, 
> YARN-9052.002.patch, YARN-9052.003.patch, YARN-9052.004.patch, 
> YARN-9052.004.withlogs.patch, YARN-9052.005.patch, YARN-9052.006.patch, 
> YARN-9052.007.patch, YARN-9052.008.patch, YARN-9052.009.patch, 
> YARN-9052.009.patch, YARN-9052.testlogs.002.patch, 
> YARN-9052.testlogs.002.patch, YARN-9052.testlogs.003.patch, 
> YARN-9052.testlogs.patch
>
>
> MockRM has 31 definitions of submitApp, most of them having more than an 
> acceptable number of parameters, ranging from 2 to as many as 22, which 
> makes the code completely unreadable.
> On top of the unreadability, it's very hard to follow which RmApp will be 
> produced for tests, as they often pass a lot of empty / null values as 
> parameters.
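As an illustration of the builder direction, a hedged sketch follows; the names 
below are hypothetical, not the exact classes the patch introduces:

{code}
// Sketch of the builder idea: callers set only the fields they care about
// instead of threading 20+ positional parameters through submitApp.
public class AppSubmissionBuilder {
  private String name = "app";
  private String queue = "default";
  private int memoryMb = 1024;

  public AppSubmissionBuilder withName(String n) { name = n; return this; }
  public AppSubmissionBuilder withQueue(String q) { queue = q; return this; }
  public AppSubmissionBuilder withMemory(int mb) { memoryMb = mb; return this; }

  @Override
  public String toString() {
    return name + " -> " + queue + " (" + memoryMb + " MB)";
  }
}
{code}

A call like {{new AppSubmissionBuilder().withQueue("root.test").withMemory(2048)}} 
reads far better than a positional call that pads the unused slots with nulls.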






[jira] [Comment Edited] (YARN-9698) [Umbrella] Tools to help migration from Fair Scheduler to Capacity Scheduler

2020-01-08 Thread Peter Bacsko (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010758#comment-17010758
 ] 

Peter Bacsko edited comment on YARN-9698 at 1/8/20 3:21 PM:


[~brahmareddy] the whole umbrella definitely won't be ready in the near future. 
However, we have some pending JIRAs which are possible candidates for 3.3.0:

YARN-9879 - multiple leaf queues w/ same name
YARN-9892 - support DRF on queue level
YARN-10067 - dry run for the FS-CS tool
YARN-9866 - CS queue mapping bugfix
YARN-9868 - CS queue mapping bugfix

Could you wait until these JIRAs are committed to trunk? What's the timeline 
for 3.3.0?


was (Author: pbacsko):
[~brahmareddy] the whole umbrella will definitely won't be ready in the near 
future. However, we have some pending JIRAs which are possible candidates for 
3.3.0:

YARN-9879 - multiple leaf queues w/ same name
YARN-9892 - support DRF on queue level
YARN-10067 - dry run for the FS-CS tool
YARN-9866 - CS queue mapping bugfix
YARN-9868 - CS queue mapping bugfix

Could you wait until these JIRAs are committed to trunk? What's the timeline 
for 3.3.0?

> [Umbrella] Tools to help migration from Fair Scheduler to Capacity Scheduler
> 
>
> Key: YARN-9698
> URL: https://issues.apache.org/jira/browse/YARN-9698
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Reporter: Weiwei Yang
>Priority: Major
>  Labels: fs2cs
> Attachments: FS-CS Migration.pdf
>
>
> We see some users want to migrate from Fair Scheduler to Capacity Scheduler, 
> this Jira is created as an umbrella to track all related efforts for the 
> migration, the scope contains
>  * Bug fixes
>  * Add missing features
>  * Migration tools that help to generate CS configs based on FS, validate 
> configs etc
>  * Documents
> this is part of CS component, the purpose is to make the migration process 
> smooth.






[jira] [Comment Edited] (YARN-9698) [Umbrella] Tools to help migration from Fair Scheduler to Capacity Scheduler

2020-01-08 Thread Peter Bacsko (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010758#comment-17010758
 ] 

Peter Bacsko edited comment on YARN-9698 at 1/8/20 3:14 PM:


[~brahmareddy] the whole umbrella will definitely won't be ready in the near 
future. However, we have some pending JIRAs which are possible candidates for 
3.3.0:

YARN-9879 - multiple leaf queues w/ same name
YARN-9892 - support DRF on queue level
YARN-10067 - dry run for the FS-CS tool
YARN-9866 - CS queue mapping bugfix
YARN-9868 - CS queue mapping bugfix

Could you wait until these JIRAs are committed to trunk? What's the timeline 
for 3.3.0?


was (Author: pbacsko):
[~brahmareddy] the whole umbrella will definitely won't be ready in the near 
future. However, we have pending JIRAs which are possible candidates for 3.3.0:

YARN-9879 - multiple leaf queues w/ same name
YARN-9892 - support DRF on queue level
YARN-10067 - dry run for the FS-CS tool
YARN-9866 - CS queue mapping bugfix
YARN-9868 - CS queue mapping bugfix

Could you wait until these JIRAs are committed to trunk? What's the timeline 
for 3.3.0?

> [Umbrella] Tools to help migration from Fair Scheduler to Capacity Scheduler
> 
>
> Key: YARN-9698
> URL: https://issues.apache.org/jira/browse/YARN-9698
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Reporter: Weiwei Yang
>Priority: Major
>  Labels: fs2cs
> Attachments: FS-CS Migration.pdf
>
>
> We see some users want to migrate from Fair Scheduler to Capacity Scheduler, 
> this Jira is created as an umbrella to track all related efforts for the 
> migration, the scope contains
>  * Bug fixes
>  * Add missing features
>  * Migration tools that help to generate CS configs based on FS, validate 
> configs etc
>  * Documents
> this is part of CS component, the purpose is to make the migration process 
> smooth.






[jira] [Commented] (YARN-9698) [Umbrella] Tools to help migration from Fair Scheduler to Capacity Scheduler

2020-01-08 Thread Peter Bacsko (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010758#comment-17010758
 ] 

Peter Bacsko commented on YARN-9698:


[~brahmareddy] the whole umbrella will definitely won't be ready in the near 
future. However, we have pending JIRAs which are possible candidates for 3.3.0:

YARN-9879 - multiple leaf queues w/ same name
YARN-9892 - support DRF on queue level
YARN-10067 - dry run for the FS-CS tool
YARN-9866 - CS queue mapping bugfix
YARN-9868 - CS queue mapping bugfix

Could you wait until these JIRAs are committed to trunk? What's the timeline 
for 3.3.0?

> [Umbrella] Tools to help migration from Fair Scheduler to Capacity Scheduler
> 
>
> Key: YARN-9698
> URL: https://issues.apache.org/jira/browse/YARN-9698
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Reporter: Weiwei Yang
>Priority: Major
>  Labels: fs2cs
> Attachments: FS-CS Migration.pdf
>
>
> We see some users want to migrate from Fair Scheduler to Capacity Scheduler, 
> this Jira is created as an umbrella to track all related efforts for the 
> migration, the scope contains
>  * Bug fixes
>  * Add missing features
>  * Migration tools that help to generate CS configs based on FS, validate 
> configs etc
>  * Documents
> this is part of CS component, the purpose is to make the migration process 
> smooth.






[jira] [Commented] (YARN-10067) Add dry-run feature to FS-CS converter tool

2020-01-08 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010711#comment-17010711
 ] 

Hadoop QA commented on YARN-10067:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
50s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 85m 
34s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | YARN-10067 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12990190/YARN-10067-004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 33eeac236b8e 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 17aa8f6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/25348/testReport/ |
| Max. process+thread count | 839 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/25348/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Add dry-run feature to FS-CS converter tool
> 

[jira] [Commented] (YARN-9052) Replace all MockRM submit method definitions with a builder

2020-01-08 Thread Eric Payne (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010710#comment-17010710
 ] 

Eric Payne commented on YARN-9052:
--

{quote}I am a little bit concerned with Jiras that are solely opened for the 
purpose of refactoring and code readability; especially when they have no 
impact on bug fixes/performance.
{quote}
I somewhat disagree with this point. I do feel that there is a place for code 
cleanup JIRAs. There is a lot of tech-debt in Hadoop that would be good to 
clean up. This JIRA, for example, addressed a real usability problem with the 
old \{{MockRM#submitApp}} method.
{quote}3. Any developer who has a pending patch that includes submitApp() in 
new tests cases (i.e., MAPREDUCE-7169) will have to go through his changes one 
more time to upload a new patch.
{quote}
{quote}4. This patch creates many conflicts with other branches and complicates 
merges going forward. This is a significant amount of man hours to handle the 
new patch.
{quote}
Yes, these are concerns of mine as well. Whenever very invasive changes like 
these are made, we have to balance any possible benefits against the very real 
added effort on backporting and upmerging.

> Replace all MockRM submit method definitions with a builder
> ---
>
> Key: YARN-9052
> URL: https://issues.apache.org/jira/browse/YARN-9052
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: 
> YARN-9052-004withlogs-patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt,
>  YARN-9052-testlogs003-justfailed.txt, 
> YARN-9052-testlogs003-patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt,
>  YARN-9052-testlogs004-justfailed.txt, YARN-9052.001.patch, 
> YARN-9052.002.patch, YARN-9052.003.patch, YARN-9052.004.patch, 
> YARN-9052.004.withlogs.patch, YARN-9052.005.patch, YARN-9052.006.patch, 
> YARN-9052.007.patch, YARN-9052.008.patch, YARN-9052.009.patch, 
> YARN-9052.009.patch, YARN-9052.testlogs.002.patch, 
> YARN-9052.testlogs.002.patch, YARN-9052.testlogs.003.patch, 
> YARN-9052.testlogs.patch
>
>
> MockRM has 31 definitions of submitApp, most of them having more than an 
> acceptable number of parameters, ranging from 2 to as many as 22, which 
> makes the code completely unreadable.
> On top of the unreadability, it's very hard to follow which RmApp will be 
> produced for tests, as they often pass a lot of empty / null values as 
> parameters.






[jira] [Created] (YARN-10077) Region in ats-hbase table 'prod.timelineservice.entity' failing to split

2020-01-08 Thread Prabhu Joseph (Jira)
Prabhu Joseph created YARN-10077:


 Summary: Region in ats-hbase table 'prod.timelineservice.entity' 
failing to split
 Key: YARN-10077
 URL: https://issues.apache.org/jira/browse/YARN-10077
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: ATSv2
Affects Versions: 3.3.0
Reporter: Prabhu Joseph


The Entity Table grows too large very quickly, and the table fails to split 
when most of the entity rows belong to a single user.
 # Need to set an optimal TTL value for the info and config column families.
 # Need to increase the prefix length for KeyPrefixRegionSplitPolicy.
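A hedged sketch of both knobs via the HBase 2.x admin API is below; the family 
name, TTL and prefix length are placeholders rather than the final tuning, and 
the split-policy key is the property that KeyPrefixRegionSplitPolicy reads:

{code}
// Sketch only: cap the TTL on a column family and raise the prefix length
// used by KeyPrefixRegionSplitPolicy so one user's rows can still split.
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class EntityTableTuningSketch {
  static TableDescriptorBuilder tune() {
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("prod.timelineservice.entity"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("i"))    // assumed info family name
            .setTimeToLive(30 * 24 * 60 * 60)  // placeholder: 30 days
            .build())
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.KeyPrefixRegionSplitPolicy")
        .setValue("KeyPrefixRegionSplitPolicy.prefix_length", "24"); // placeholder
  }
}
{code}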






[jira] [Commented] (YARN-10067) Add dry-run feature to FS-CS converter tool

2020-01-08 Thread Peter Bacsko (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010644#comment-17010644
 ] 

Peter Bacsko commented on YARN-10067:
-

Thanks [~snemeth].

I addressed all comments except #5, which to me seems like a bit of 
over-engineering (but if you show me an example, I can be convinced). 

> Add dry-run feature to FS-CS converter tool
> ---
>
> Key: YARN-10067
> URL: https://issues.apache.org/jira/browse/YARN-10067
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-10067-001.patch, YARN-10067-002.patch, 
> YARN-10067-003.patch, YARN-10067-004.patch
>
>
> Add a "d" / "-dry-run" switch to the tool. The purpose of this would be to 
> inform the user whether a conversion is possible and if it is, are there any 
> warnings.
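A small sketch of the flag wiring, assuming Apache Commons CLI as used 
elsewhere in Hadoop; only the switch handling is shown, not the converter 
itself:

{code}
// Sketch: parse a -d/--dry-run switch and skip writing output when set.
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.Options;

public class DryRunFlagSketch {
  public static void main(String[] args) throws Exception {
    Options opts = new Options();
    opts.addOption("d", "dry-run", false,
        "validate the conversion and report warnings without writing output");
    boolean dryRun = new DefaultParser().parse(opts, args).hasOption("d");
    if (dryRun) {
      System.out.println("Conversion is possible; check warnings above.");
      return; // do not emit capacity-scheduler.xml
    }
    // ... normal conversion path ...
  }
}
{code}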






[jira] [Updated] (YARN-10067) Add dry-run feature to FS-CS converter tool

2020-01-08 Thread Peter Bacsko (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-10067:

Attachment: YARN-10067-004.patch

> Add dry-run feature to FS-CS converter tool
> ---
>
> Key: YARN-10067
> URL: https://issues.apache.org/jira/browse/YARN-10067
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-10067-001.patch, YARN-10067-002.patch, 
> YARN-10067-003.patch, YARN-10067-004.patch
>
>
> Add a "d" / "-dry-run" switch to the tool. The purpose of this would be to 
> inform the user whether a conversion is possible and if it is, are there any 
> warnings.






[jira] [Commented] (YARN-10071) Sync Mockito version with other modules

2020-01-08 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010604#comment-17010604
 ] 

Hadoop QA commented on YARN-10071:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
2s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
67m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
6s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-yarn-applications-mawo-core in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m 
27s{color} | {color:green} hadoop-dynamometer-infra in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-dynamometer-blockgen in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | YARN-10071 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12990177/YARN-10071.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux ddd7fbf32cdc 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7030722 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/25347/testReport/ |
| Max. process+thread count | 929 

[jira] [Commented] (YARN-10063) Usage output of container-executor binary needs to include --http/--https argument

2020-01-08 Thread Peter Bacsko (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010577#comment-17010577
 ] 

Peter Bacsko commented on YARN-10063:
-

Good point [~wilfreds] - I agree with the above.

> Usage output of container-executor binary needs to include --http/--https 
> argument
> --
>
> Key: YARN-10063
> URL: https://issues.apache.org/jira/browse/YARN-10063
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Siddharth Ahuja
>Assignee: Siddharth Ahuja
>Priority: Minor
> Attachments: YARN-10063.001.patch, YARN-10063.002.patch
>
>
> YARN-8448/YARN-6586 seems to have introduced a new option - "\--http" 
> (default) and "\--https" that is possible to be passed in to the 
> container-executor binary, see :
> https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c#L564
> and 
> https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c#L521
> however, the usage output seems to have missed this:
> https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c#L74
> Raising this jira to improve this.






[jira] [Commented] (YARN-10063) Usage output of container-executor binary needs to include --http/--https argument

2020-01-08 Thread Wilfred Spiegelenburg (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010548#comment-17010548
 ] 

Wilfred Spiegelenburg commented on YARN-10063:
--

Thank you for the review [~pbacsko] and for the update [~sahuja].

We need to add the usage correctly to the output. What I see at the moment does 
not look correct. The output indicates that we have {{--http | --https}}. It 
does not show that the two parameters following the "choice" are only supported 
with the https option.

Not sure how we can show that better; we might need to go partially back to the 
way [~sahuja] showed it in the first version and use the same construct as used 
for the {{command and command-args}} to provide the detail needed, so 
something like:
{quote}where command and command-args:
 initialize container: 0 appid tokens nm-local-dirs nm-log-dirs cmd app...
 launch container: 1 appid containerid workdir container-script tokens 
*http-option* pidfile nm-local-dirs nm-log-dirs resources 
optional-tc-command-file
 [DISABLED] launch docker container: 4 appid containerid 
 ...
 where http-option is one of:
 --http
 --https keystorepath truststorepath
{quote}

A bit difficult to show in jira but I think you get the gist.

> Usage output of container-executor binary needs to include --http/--https 
> argument
> --
>
> Key: YARN-10063
> URL: https://issues.apache.org/jira/browse/YARN-10063
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Siddharth Ahuja
>Assignee: Siddharth Ahuja
>Priority: Minor
> Attachments: YARN-10063.001.patch, YARN-10063.002.patch
>
>
> YARN-8448/YARN-6586 seems to have introduced a new option - "\--http" 
> (default) and "\--https" that is possible to be passed in to the 
> container-executor binary, see :
> https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c#L564
> and 
> https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c#L521
> however, the usage output seems to have missed this:
> https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c#L74
> Raising this jira to improve this.






[jira] [Updated] (YARN-10071) Sync Mockito version with other modules

2020-01-08 Thread Adam Antal (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated YARN-10071:
--
Attachment: YARN-10071.001.patch

> Sync Mockito version with other modules
> ---
>
> Key: YARN-10071
> URL: https://issues.apache.org/jira/browse/YARN-10071
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: build, test
>Reporter: Akira Ajisaka
>Assignee: Adam Antal
>Priority: Major
> Attachments: YARN-10071.001.patch
>
>
> YARN-8551 introduced Mockito 1.x dependency, update.






[jira] [Updated] (YARN-10075) historyContext doesn't need to be a class attribute inside JobHistoryServer

2020-01-08 Thread Siddharth Ahuja (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Ahuja updated YARN-10075:
---
Description: 
"historyContext" class attribute at 
https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L67
 is assigned a cast of another class attribute - "jobHistoryService" - 
https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L131,
 however it does not need to be stored separately because it is only ever used 
once in the class, and only as an argument when instantiating the 
HistoryClientService class at 
https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L155.

Therefore, we could just delete the lines at 
https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L67
 and 
https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L131
 completely and instantiate the HistoryClientService as follows:

{code}
  @VisibleForTesting
  protected HistoryClientService createHistoryClientService() {
return new HistoryClientService((HistoryContext)jobHistoryService, 
this.jhsDTSecretManager);
  }
{code}

  was:
"historyContext" class attribute at 
https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L67
 is assigned a cast of another class attribute - "jobHistoryService" - 
https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L131,
 however it does not need to be stored separately because it is only ever used 
once in the class, and only as an argument when instantiating the 
HistoryClientService class at 
https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L155.

Therefore, we could just delete the line at 
https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L131
 completely and instantiate the HistoryClientService as follows:

{code}
  @VisibleForTesting
  protected HistoryClientService createHistoryClientService() {
return new HistoryClientService((HistoryContext)jobHistoryService, 
this.jhsDTSecretManager);
  }
{code}


> historyContext doesn't need to be a class attribute inside JobHistoryServer
> ---
>
> Key: YARN-10075
> URL: https://issues.apache.org/jira/browse/YARN-10075
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Siddharth Ahuja
>Assignee: Siddharth Ahuja
>Priority: Minor
>
> "historyContext" class attribute at 
> https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L67
>  is assigned a cast of another class attribute - "jobHistoryService" - 
> https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L131,
>  however it does not need to be stored separately because it is only ever 
> used once in the class, and only as an argument when instantiating the 
> HistoryClientService class at 
> https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L155.
> Therefore, we could just delete the lines at 
> https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L67
>  and 
> 

[jira] [Updated] (YARN-10075) historyContext doesn't need to be a class attribute inside JobHistoryServer

2020-01-08 Thread Siddharth Ahuja (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Ahuja updated YARN-10075:
---
Component/s: yarn

> historyContext doesn't need to be a class attribute inside JobHistoryServer
> ---
>
> Key: YARN-10075
> URL: https://issues.apache.org/jira/browse/YARN-10075
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Reporter: Siddharth Ahuja
>Assignee: Siddharth Ahuja
>Priority: Minor
>
> "historyContext" class attribute at 
> https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L67
>  is assigned a cast of another class attribute - "jobHistoryService" - 
> https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L131,
>  however it does not need to be stored separately, because it is only ever 
> used once in the class, and even then only as an argument when instantiating the 
> HistoryClientService class at 
> https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L155.
> Therefore, we could just delete the lines at 
> https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L67
>  and 
> https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L131
>  completely and instantiate the HistoryClientService as follows:
> {code}
>   @VisibleForTesting
>   protected HistoryClientService createHistoryClientService() {
>     return new HistoryClientService((HistoryContext) jobHistoryService,
>         this.jhsDTSecretManager);
>   }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10076) Add ability to provide required app information manually for log servlet

2020-01-08 Thread Adam Antal (Jira)
Adam Antal created YARN-10076:
-

 Summary: Add ability to provide required app information manually 
for log servlet
 Key: YARN-10076
 URL: https://issues.apache.org/jira/browse/YARN-10076
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: yarn
Affects Versions: 3.3.0
Reporter: Adam Antal
Assignee: Adam Antal


The log servlet receives its inputs from either the RM (in the case of the JHS) 
or the timeline service (in the case of the AHS/ATS). If the RM is configured 
to keep only a fairly low maximum number of applications in the state store, 
we will not get the information the log servlet requires, even though the 
aggregated logs are still available at the expected path. We should provide a 
"dummy" implementation of the {{AppInfoProvider}}, as sketched below.
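
A minimal null-object-style sketch of such a dummy provider follows; the real 
{{AppInfoProvider}} contract is not quoted here, so the interface shape below 
is an assumption for illustration only:

{code}
// Hypothetical interface shape -- the actual AppInfoProvider may differ.
interface AppInfoProvider {
  String getUser(String appId);
  String getAppState(String appId);
}

// Null-object style: return fixed, caller-supplied values instead of querying
// the RM or the timeline service, so aggregated log paths can still be built.
class StaticAppInfoProvider implements AppInfoProvider {
  private final String user;

  StaticAppInfoProvider(String user) {
    this.user = user;
  }

  @Override
  public String getUser(String appId) {
    return user;
  }

  @Override
  public String getAppState(String appId) {
    return "FINISHED"; // aggregated logs imply the app already finished
  }
}
{code}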



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-10075) historyContext doesn't need to be a class attribute inside JobHistoryServer

2020-01-08 Thread Siddharth Ahuja (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Ahuja reassigned YARN-10075:
--

Assignee: Siddharth Ahuja

> historyContext doesn't need to be a class attribute inside JobHistoryServer
> ---
>
> Key: YARN-10075
> URL: https://issues.apache.org/jira/browse/YARN-10075
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Siddharth Ahuja
>Assignee: Siddharth Ahuja
>Priority: Minor
>
> "historyContext" class attribute at 
> https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L67
>  is assigned a cast of another class attribute - "jobHistoryService" - 
> https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L131,
>  however it does not need to be stored separately, because it is only ever 
> used once in the class, and even then only as an argument when instantiating the 
> HistoryClientService class at 
> https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L155.
> Therefore, we could just delete the line at 
> https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L131
>  completely and instantiate the HistoryClientService as follows:
> {code}
>   @VisibleForTesting
>   protected HistoryClientService createHistoryClientService() {
>     return new HistoryClientService((HistoryContext) jobHistoryService,
>         this.jhsDTSecretManager);
>   }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10075) historyContext doesn't need to be a class attribute inside JobHistoryServer

2020-01-08 Thread Siddharth Ahuja (Jira)
Siddharth Ahuja created YARN-10075:
--

 Summary: historyContext doesn't need to be a class attribute 
inside JobHistoryServer
 Key: YARN-10075
 URL: https://issues.apache.org/jira/browse/YARN-10075
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Siddharth Ahuja


"historyContext" class attribute at 
https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L67
 is assigned a cast of another class attribute - "jobHistoryService" - 
https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L131,
 however it does not need to be stored separately, because it is only ever used 
once in the class, and even then only as an argument when instantiating the 
HistoryClientService class at 
https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L155.

Therefore, we could just delete the line at 
https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java#L131
 completely and instantiate the HistoryClientService as follows:

{code}
  @VisibleForTesting
  protected HistoryClientService createHistoryClientService() {
    return new HistoryClientService((HistoryContext) jobHistoryService,
        this.jhsDTSecretManager);
  }
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10071) Sync Mockito version with other modules

2020-01-08 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010501#comment-17010501
 ] 

Adam Antal commented on YARN-10071:
---

As far as I can see, mockito-all is used in Dynamometer and the Yarn app catalog. 

Let's use version 1.10.19 of Mockito for Dynamometer, leave 1.9 for the Yarn 
app catalog, and remove the artifact from MaWo.
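
For Dynamometer, the bump would be a small pom change; a sketch, using the 
standard Mockito coordinates (the exact module pom is not named here):

{code}
<dependency>
  <groupId>org.mockito</groupId>
  <artifactId>mockito-all</artifactId>
  <version>1.10.19</version>
  <scope>test</scope>
</dependency>
{code}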

> Sync Mockito version with other modules
> ---
>
> Key: YARN-10071
> URL: https://issues.apache.org/jira/browse/YARN-10071
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: build, test
>Reporter: Akira Ajisaka
>Assignee: Adam Antal
>Priority: Major
>
> YARN-8551 introduced Mockito 1.x dependency, update.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-10071) Sync Mockito version with other modules

2020-01-08 Thread Adam Antal (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal reassigned YARN-10071:
-

Assignee: Adam Antal

> Sync Mockito version with other modules
> ---
>
> Key: YARN-10071
> URL: https://issues.apache.org/jira/browse/YARN-10071
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: build, test
>Reporter: Akira Ajisaka
>Assignee: Adam Antal
>Priority: Major
>
> YARN-8551 introduced Mockito 1.x dependency, update.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10074) Update netty to 4.1.42Final in yarn-csi

2020-01-08 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created YARN-10074:
--

 Summary: Update netty to 4.1.42Final in yarn-csi
 Key: YARN-10074
 URL: https://issues.apache.org/jira/browse/YARN-10074
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang


Looks like HADOOP-16643 is not complete. 

https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml#L32
yarn-csi still depends on netty-all 4.1.27.Final.

[~leosun08] would you be interested in providing another patch to update it 
here?
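
The fix should be a one-line version bump in that pom; a sketch, assuming the 
version remains declared directly in hadoop-yarn-csi rather than inherited 
from the hadoop-project pom:

{code}
<dependency>
  <groupId>io.netty</groupId>
  <artifactId>netty-all</artifactId>
  <!-- was 4.1.27.Final; align with the HADOOP-16643 upgrade -->
  <version>4.1.42.Final</version>
</dependency>
{code}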



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-8851) [Umbrella] A pluggable device plugin framework to ease vendor plugin development

2020-01-08 Thread Zhankun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang resolved YARN-8851.

Fix Version/s: 3.3.0
   Resolution: Fixed

> [Umbrella] A pluggable device plugin framework to ease vendor plugin 
> development
> 
>
> Key: YARN-8851
> URL: https://issues.apache.org/jira/browse/YARN-8851
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-8851-WIP2-trunk.001.patch, 
> YARN-8851-WIP3-trunk.001.patch, YARN-8851-WIP4-trunk.001.patch, 
> YARN-8851-WIP5-trunk.001.patch, YARN-8851-WIP6-trunk.001.patch, 
> YARN-8851-WIP7-trunk.001.patch, YARN-8851-WIP8-trunk.001.patch, 
> YARN-8851-WIP9-trunk.001.patch, YARN-8851-trunk.001.patch, 
> YARN-8851-trunk.002.patch, [YARN-8851] 
> YARN_New_Device_Plugin_Framework_Design_Proposal-3.pdf, [YARN-8851] 
> YARN_New_Device_Plugin_Framework_Design_Proposal-4.pdf, [YARN-8851] 
> YARN_New_Device_Plugin_Framework_Design_Proposal.pdf
>
>
> At present, we support GPU/FPGA devices in YARN in a native, tightly coupled 
> way. But it's difficult for a vendor to implement such a device plugin, 
> because the developer needs deep knowledge of YARN internals, and this places 
> a burden on the community to maintain both YARN core and vendor-specific code.
> Here we propose a new device plugin framework to ease vendor device plugin 
> development and provide a more flexible way to integrate with the YARN NM.
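
To make the vendor-facing surface concrete, below is a minimal plugin sketch 
modeled on the {{DevicePlugin}} interface from the design docs; the package 
name and builder signatures are recalled from the documentation rather than 
verified against the merged code, so treat them as approximate.

{code}
import java.util.Set;
import java.util.TreeSet;

// Package per the framework docs; verify against the merged code.
import org.apache.hadoop.yarn.server.nodemanager.api.deviceplugin.Device;
import org.apache.hadoop.yarn.server.nodemanager.api.deviceplugin.DevicePlugin;
import org.apache.hadoop.yarn.server.nodemanager.api.deviceplugin.DeviceRegisterRequest;
import org.apache.hadoop.yarn.server.nodemanager.api.deviceplugin.DeviceRuntimeSpec;
import org.apache.hadoop.yarn.server.nodemanager.api.deviceplugin.YarnRuntimeType;

public class FakeVendorDevicePlugin implements DevicePlugin {

  @Override
  public DeviceRegisterRequest getRegisterRequestInfo() throws Exception {
    // Register a custom countable resource type with the NM.
    return DeviceRegisterRequest.Builder.newInstance()
        .setResourceName("vendor.com/fakedevice").build();
  }

  @Override
  public Set<Device> getDevices() throws Exception {
    // Report one healthy device; a real plugin would probe the hardware.
    Set<Device> devices = new TreeSet<>();
    devices.add(Device.Builder.newInstance()
        .setId(0)
        .setDevPath("/dev/fake0")
        .setHealthy(true)
        .build());
    return devices;
  }

  @Override
  public DeviceRuntimeSpec onDevicesAllocated(Set<Device> allocatedDevices,
      YarnRuntimeType yarnRuntime) throws Exception {
    return null; // null lets the NM apply its default isolation handling
  }

  @Override
  public void onDevicesReleased(Set<Device> releasedDevices) throws Exception {
    // Nothing to clean up for this fake device.
  }
}
{code}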



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8851) [Umbrella] A pluggable device plugin framework to ease vendor plugin development

2020-01-08 Thread Zhankun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010470#comment-17010470
 ] 

Zhankun Tang commented on YARN-8851:


[~brahmareddy], thanks for planning the 3.3.0 release. Yeah. Let me close this 
Jira and move the remaining JIRAs out.

> [Umbrella] A pluggable device plugin framework to ease vendor plugin 
> development
> 
>
> Key: YARN-8851
> URL: https://issues.apache.org/jira/browse/YARN-8851
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-8851-WIP2-trunk.001.patch, 
> YARN-8851-WIP3-trunk.001.patch, YARN-8851-WIP4-trunk.001.patch, 
> YARN-8851-WIP5-trunk.001.patch, YARN-8851-WIP6-trunk.001.patch, 
> YARN-8851-WIP7-trunk.001.patch, YARN-8851-WIP8-trunk.001.patch, 
> YARN-8851-WIP9-trunk.001.patch, YARN-8851-trunk.001.patch, 
> YARN-8851-trunk.002.patch, [YARN-8851] 
> YARN_New_Device_Plugin_Framework_Design_Proposal-3.pdf, [YARN-8851] 
> YARN_New_Device_Plugin_Framework_Design_Proposal-4.pdf, [YARN-8851] 
> YARN_New_Device_Plugin_Framework_Design_Proposal.pdf
>
>
> At present, we support GPU/FPGA devices in YARN in a native, tightly coupled 
> way. But it's difficult for a vendor to implement such a device plugin, 
> because the developer needs deep knowledge of YARN internals, and this places 
> a burden on the community to maintain both YARN core and vendor-specific code.
> Here we propose a new device plugin framework to ease vendor device plugin 
> development and provide a more flexible way to integrate with the YARN NM.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9879) Allow multiple leaf queues with the same name in CS

2020-01-08 Thread Peter Bacsko (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010461#comment-17010461
 ] 

Peter Bacsko commented on YARN-9879:


I just mentioned the mapping because, based on my (admittedly limited) 
knowledge, it's the heart of CS when it comes to managing the queues.

It's handled inside 
[https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerQueueManager.java].

So I was under the impression that this part needs to be changed; I may be 
wrong, though.
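
To illustrate the restriction being discussed, here is a self-contained toy 
sketch (deliberately not CapacityScheduler code) of a lookup map keyed by the 
short leaf name; a single flat map like this is what forces leaf names to be 
globally unique today:

{code}
import java.util.HashMap;
import java.util.Map;

// Toy model: index leaf queues by short name, the way a single flat lookup
// map forces names to be unique across the whole hierarchy.
public class LeafNameLookupSketch {
  private final Map<String, String> leafByShortName = new HashMap<>();

  public void addLeaf(String fullPath) {
    String shortName = fullPath.substring(fullPath.lastIndexOf('.') + 1);
    String previous = leafByShortName.put(shortName, fullPath);
    if (previous != null) {
      throw new IllegalStateException("Duplicate leaf queue name '" + shortName
          + "': " + previous + " vs " + fullPath);
    }
  }

  public static void main(String[] args) {
    LeafNameLookupSketch lookup = new LeafNameLookupSketch();
    lookup.addLeaf("root.engineering.dev");
    lookup.addLeaf("root.marketing.dev"); // throws: short names collide
  }
}
{code}

Supporting duplicates would mean keying lookups like this by the full path 
(e.g. root.engineering.dev) and resolving ambiguous short names at placement 
time.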

> Allow multiple leaf queues with the same name in CS
> ---
>
> Key: YARN-9879
> URL: https://issues.apache.org/jira/browse/YARN-9879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Attachments: DesignDoc_v1.pdf
>
>
> Currently a leaf queue's name must be unique regardless of its position in 
> the queue hierarchy. 
> A design doc and first proposal are being prepared; I'll attach them as soon 
> as they're done.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org