[jira] [Commented] (YARN-7223) Document GPU isolation feature

2018-02-20 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371000#comment-16371000
 ] 

Sunil G commented on YARN-7223:
---

Looks fine, committing shortly if no objections.

> Document GPU isolation feature
> --
>
> Key: YARN-7223
> URL: https://issues.apache.org/jira/browse/YARN-7223
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7223.002.patch, YARN-7223.002.pdf, 
> YARN-7223.003.patch, YARN-7223.wip.001.patch, YARN-7223.wip.001.pdf
>
>







[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 beta release

2018-02-20 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370996#comment-16370996
 ] 

Rohith Sharma K S commented on YARN-7346:
-

Looks like compiling against both versions fails because of a dependency 
convergence error for hadoop-yarn-server-timelineservice-hbase-common. We can't 
go ahead with option-2 because of the dependency convergence check.

> Fix compilation errors against hbase2 beta release
> --
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7346.00.patch, YARN-7346.01.patch, 
> YARN-7346.02.patch, YARN-7346.03-incremental.patch, YARN-7346.03.patch, 
> YARN-7346.04.patch, YARN-7346.prelim1.patch, YARN-7346.prelim2.patch, 
> YARN-7581.prelim.patch
>
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.






[jira] [Commented] (YARN-7223) Document GPU isolation feature

2018-02-20 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370965#comment-16370965
 ] 

genericqa commented on YARN-7223:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
25m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7223 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911338/YARN-7223.003.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 994b070bc747 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 121e1e1 |
| maven | version: Apache Maven 3.3.9 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/19754/artifact/out/whitespace-eol.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/19754/artifact/out/whitespace-tabs.txt
 |
| Max. process+thread count | 409 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19754/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Document GPU isolation feature
> --
>
> Key: YARN-7223
> URL: https://issues.apache.org/jira/browse/YARN-7223
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7223.002.patch, YARN-7223.002.pdf, 
> YARN-7223.003.patch, YARN-7223.wip.001.patch, YARN-7223.wip.001.pdf
>
>







[jira] [Commented] (YARN-5028) RMStateStore should trim down app state for completed applications

2018-02-20 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370958#comment-16370958
 ] 

Rohith Sharma K S commented on YARN-5028:
-

The latest patch looks reasonable to me.

> RMStateStore should trim down app state for completed applications
> --
>
> Key: YARN-5028
> URL: https://issues.apache.org/jira/browse/YARN-5028
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Gergo Repas
>Priority: Major
> Attachments: YARN-5028.000.patch, YARN-5028.001.patch, 
> YARN-5028.002.patch, YARN-5028.003.patch, YARN-5028.004.patch, 
> YARN-5028.005.patch, YARN-5028.006.patch, YARN-5028.007.patch
>
>
> RMStateStore stores enough information to recover applications in case of a 
> restart. The store also retains this information for completed applications 
> to serve their status to REST, WebUI, Java and CLI clients. We don't need all 
> the information we store today to serve application status; for instance, we 
> don't need the {{ApplicationSubmissionContext}}. 
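To make the trimming idea concrete, a minimal sketch of what a completed 
application's stored state could shrink to (the type and fields below are 
illustrative assumptions; the actual patch works against ApplicationStateData 
and the RMStateStore implementations):

{code:java}
// Illustrative only: keep just the fields needed to serve status for a
// completed application; the bulky ApplicationSubmissionContext is dropped.
final class TrimmedAppState {
  final String appId;
  final String user;
  final String queue;
  final long submitTime;
  final long finishTime;
  final String finalStatus;   // e.g. SUCCEEDED / FAILED / KILLED
  final String diagnostics;

  TrimmedAppState(String appId, String user, String queue, long submitTime,
      long finishTime, String finalStatus, String diagnostics) {
    this.appId = appId;
    this.user = user;
    this.queue = queue;
    this.submitTime = submitTime;
    this.finishTime = finishTime;
    this.finalStatus = finalStatus;
    this.diagnostics = diagnostics;
  }
}
{code}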






[jira] [Updated] (YARN-7223) Document GPU isolation feature

2018-02-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7223:
-
Attachment: YARN-7223.003.patch

> Document GPU isolation feature
> --
>
> Key: YARN-7223
> URL: https://issues.apache.org/jira/browse/YARN-7223
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7223.002.patch, YARN-7223.002.pdf, 
> YARN-7223.003.patch, YARN-7223.wip.001.patch, YARN-7223.wip.001.pdf
>
>







[jira] [Commented] (YARN-7223) Document GPU isolation feature

2018-02-20 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370934#comment-16370934
 ] 

Wangda Tan commented on YARN-7223:
--

Thanks [~sunilg], attached ver.3 patch, please review.

> Document GPU isolation feature
> --
>
> Key: YARN-7223
> URL: https://issues.apache.org/jira/browse/YARN-7223
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7223.002.patch, YARN-7223.002.pdf, 
> YARN-7223.003.patch, YARN-7223.wip.001.patch, YARN-7223.wip.001.pdf
>
>







[jira] [Commented] (YARN-7952) Find a way to persist the log aggregation status

2018-02-20 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370930#comment-16370930
 ] 

Wangda Tan commented on YARN-7952:
--

Thanks [~xgong], I prefer option #2.

> Find a way to persist the log aggregation status
> 
>
> Key: YARN-7952
> URL: https://issues.apache.org/jira/browse/YARN-7952
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>Priority: Major
>
> In MAPREDUCE-6415, we created a CLI to har the aggregated logs, and in 
> YARN-4946 (RM should write out an Aggregated Log Completion file flag next to 
> the logs) we discussed how to get the log aggregation status: make a client 
> call to the RM, or read it directly from the distributed file system (HDFS).
> No matter which approach we choose, we first need to figure out a way to 
> persist the log aggregation status. This ticket tracks the progress of that 
> work.






[jira] [Updated] (YARN-7949) ArtifactsId should not be a compulsory field for new service

2018-02-20 Thread Yesha Vora (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yesha Vora updated YARN-7949:
-
Attachment: YARN-7949.001.patch

> ArtifactsId should not be a compulsory field for new service
> 
>
> Key: YARN-7949
> URL: https://issues.apache.org/jira/browse/YARN-7949
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Yesha Vora
>Priority: Major
> Attachments: YARN-7949.001.patch
>
>
> 1) Click on New Service 
> 2) Create a component
> The Create Component page has Artifacts Id as a compulsory entry. A few YARN 
> service examples, such as sleeper.json, do not need to provide an artifacts id.
> {code:java|title=sleeper.json}
> {
>   "name": "sleeper-service",
>   "components" :
>   [
> {
>   "name": "sleeper",
>   "number_of_containers": 2,
>   "launch_command": "sleep 90",
>   "resource": {
> "cpus": 1,
> "memory": "256"
>   }
> }
>   ]
> }{code}
> Thus, artifactsId should not be a compulsory field.






[jira] [Commented] (YARN-7403) [GQ] Compute global and local "IdealAllocation"

2018-02-20 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370912#comment-16370912
 ] 

Konstantinos Karanasos commented on YARN-7403:
--

Thanks for the patch, [~curino].

I would suggest splitting it into two parts to make it easier to follow.

The first part can be the data structures to model the federated cluster, and 
the second the algorithms for rebalancing. Another way would be to add the data 
structures and the abstract rebalancer first, and then add the various 
implementations in a second jira.

> [GQ] Compute global and local "IdealAllocation"
> ---
>
> Key: YARN-7403
> URL: https://issues.apache.org/jira/browse/YARN-7403
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
> Attachments: YARN-7403.draft.patch, YARN-7403.draft2.patch, 
> YARN-7403.draft3.patch, YARN-7403.v1.patch, YARN-7403.v2.patch, 
> global-queues-preemption.PNG
>
>
> This JIRA tracks the algorithmic effort to combine the local queue views of 
> capacity guarantee/use/demand and compute the global ideal allocation, and 
> the respective local allocations. This will inform the RMs in each 
> sub-cluster on how to allocate more containers to each queue (allowing for 
> temporary over/under allocations that are locally excessive, but globally 
> correct).
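As a toy illustration of the rebalancing problem (this is not the algorithm in 
the patch; all names are made up), a queue's global guarantee could be split 
across sub-clusters in proportion to their local demand:

{code:java}
import java.util.HashMap;
import java.util.Map;

// Toy sketch: split a queue's global guarantee across sub-clusters in
// proportion to local demand; fall back to an equal split with no demand.
final class ProportionalSplit {
  static Map<String, Double> split(Map<String, Double> demandBySubCluster,
      double globalGuarantee) {
    double totalDemand = demandBySubCluster.values().stream()
        .mapToDouble(Double::doubleValue).sum();
    Map<String, Double> allocation = new HashMap<>();
    for (Map.Entry<String, Double> e : demandBySubCluster.entrySet()) {
      double share = totalDemand == 0
          ? globalGuarantee / demandBySubCluster.size()
          : globalGuarantee * e.getValue() / totalDemand;
      allocation.put(e.getKey(), share);
    }
    return allocation;
  }
}
{code}

The actual patch additionally allows temporary local over/under allocation, 
which a simple proportional split like this does not capture.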






[jira] [Commented] (YARN-7952) Find a way to persist the log aggregation status

2018-02-20 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370900#comment-16370900
 ] 

Xuan Gong commented on YARN-7952:
-

[~rkanter] [~leftnoteasy] [~vinodkv]

What do you think?

> Find a way to persist the log aggregation status
> 
>
> Key: YARN-7952
> URL: https://issues.apache.org/jira/browse/YARN-7952
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>Priority: Major
>
> In MAPREDUCE-6415, we created a CLI to har the aggregated logs, and in 
> YARN-4946 (RM should write out an Aggregated Log Completion file flag next to 
> the logs) we discussed how to get the log aggregation status: make a client 
> call to the RM, or read it directly from the distributed file system (HDFS).
> No matter which approach we choose, we first need to figure out a way to 
> persist the log aggregation status. This ticket tracks the progress of that 
> work.






[jira] [Commented] (YARN-7952) Find a way to persist the log aggregation status

2018-02-20 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370898#comment-16370898
 ] 

Xuan Gong commented on YARN-7952:
-

Right now, the NM periodically sends its own log aggregation status to the RM. 
The RM aggregates the status for each application, but it does not generate the 
final status until a client call (from the web UI or CLI) triggers it. However, 
the RM never persists the log aggregation status, so when the RM restarts/fails 
over, the log aggregation status becomes “NOT_STARTED”. This is confusing; 
maybe we should change it to “NOT_AVAILABLE” (will create a separate ticket for 
this). In any case, we need to persist the log aggregation status for future use.

Option one: the centralized approach.

Create a new service in the RM, called LogAggregationTrackingService, which 
tracks the log aggregation status for all applications. We can also introduce 
an “EXPIRY_INTERVAL_MS”. The service wakes up periodically to check the log 
aggregation progress, much like a LivenessMonitor (such as AMLivenessMonitor). 
After EXPIRY_INTERVAL_MS, the service triggers an RMStateStore update event to 
persist the final log aggregation status, so we need to add one more 
RMStateStore event for every application. Also, if the RM restarts/fails over 
within the EXPIRY_INTERVAL_MS window, we still lose the log aggregation status.

Option two: only track the log aggregation status of the latest applications.

This approach does not persist the log aggregation status, so we do not need to 
trigger a new RMStateStore event. When the NM sends the log aggregation status 
to the RM, it keeps a copy in its own memory (do we need to persist it in the 
NM state store?). We also introduce an “EXPIRY_INTERVAL_MS”. When the RM 
restarts/fails over, the NM re-registers with the RM; at that point, the NM 
sends its previous copy of the log aggregation status to the RM, filtered by 
the configured “EXPIRY_INTERVAL_MS” (current_timestamp - last_updated_timestamp 
<= EXPIRY_INTERVAL_MS), so the RM can regenerate the log aggregation status. 
Most of the changes happen on the NM side.

Option three: Option one + Option two
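A minimal sketch of the expiry check in option two (the class and method names 
below are illustrative assumptions, not the actual YARN API):

{code:java}
import java.util.ArrayList;
import java.util.List;

// Illustrative only: on re-registration after an RM restart/fail-over, the
// NM replays the cached log aggregation reports that are still within the
// expiry window, so the RM can regenerate the aggregated status.
final class LogAggregationReplay {

  static final class Report {
    final String appId;
    final String status;          // e.g. RUNNING / SUCCEEDED / FAILED
    final long lastUpdatedTimestamp;

    Report(String appId, String status, long lastUpdatedTimestamp) {
      this.appId = appId;
      this.status = status;
      this.lastUpdatedTimestamp = lastUpdatedTimestamp;
    }
  }

  static List<Report> reportsToReplay(List<Report> cached, long nowMs,
      long expiryIntervalMs) {
    List<Report> fresh = new ArrayList<>();
    for (Report r : cached) {
      // current_timestamp - last_updated_timestamp <= EXPIRY_INTERVAL_MS
      if (nowMs - r.lastUpdatedTimestamp <= expiryIntervalMs) {
        fresh.add(r);
      }
    }
    return fresh;
  }
}
{code}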

> Find a way to persist the log aggregation status
> 
>
> Key: YARN-7952
> URL: https://issues.apache.org/jira/browse/YARN-7952
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>Priority: Major
>
> In MAPREDUCE-6415, we created a CLI to har the aggregated logs, and in 
> YARN-4946 (RM should write out an Aggregated Log Completion file flag next to 
> the logs) we discussed how to get the log aggregation status: make a client 
> call to the RM, or read it directly from the distributed file system (HDFS).
> No matter which approach we choose, we first need to figure out a way to 
> persist the log aggregation status. This ticket tracks the progress of that 
> work.






[jira] [Created] (YARN-7952) Find a way to persist the log aggregation status

2018-02-20 Thread Xuan Gong (JIRA)
Xuan Gong created YARN-7952:
---

 Summary: Find a way to persist the log aggregation status
 Key: YARN-7952
 URL: https://issues.apache.org/jira/browse/YARN-7952
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Xuan Gong


In MAPREDUCE-6415, we created a CLI to har the aggregated logs, and in 
YARN-4946 (RM should write out an Aggregated Log Completion file flag next to 
the logs) we discussed how to get the log aggregation status: make a client 
call to the RM, or read it directly from the distributed file system (HDFS).
No matter which approach we choose, we first need to figure out a way to 
persist the log aggregation status. This ticket tracks the progress of that 
work.






[jira] [Created] (YARN-7951) Find a way to persist the log aggregation status

2018-02-20 Thread Xuan Gong (JIRA)
Xuan Gong created YARN-7951:
---

 Summary: Find a way to persist the log aggregation status
 Key: YARN-7951
 URL: https://issues.apache.org/jira/browse/YARN-7951
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Xuan Gong


In MAPREDUCE-6415, we created a CLI to har the aggregated logs, and in 
YARN-4946 (RM should write out an Aggregated Log Completion file flag next to 
the logs) we discussed how to get the log aggregation status: make a client 
call to the RM, or read it directly from the distributed file system (HDFS).
No matter which approach we choose, we first need to figure out a way to 
persist the log aggregation status. This ticket tracks the progress of that 
work.






[jira] [Commented] (YARN-4488) CapacityScheduler: Compute per-container allocation latency and roll up to get per-application and per-queue

2018-02-20 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370884#comment-16370884
 ] 

Wangda Tan commented on YARN-4488:
--

[~maniraj...@gmail.com], thanks for working on the patch. I took a glance at 
it, but most of the changes seem to be related to class imports.

Could you elaborate:
1) What's the approach?
2) What changes are needed?
3) Apart from YARN, do we need to change other projects?
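For context, a bare-bones sketch of what per-container allocation latency 
rolled up per application could look like (illustrative names only, not the 
patch's code; the per-queue rollup would aggregate applications the same way):

{code:java}
import java.util.List;

// Illustrative only: a container's allocation latency is the gap between
// the request and the allocation; an application's value is the mean over
// its containers, and a queue would roll up its applications similarly.
final class AllocationLatency {

  static long containerLatencyMs(long requestTimeMs, long allocationTimeMs) {
    return allocationTimeMs - requestTimeMs;
  }

  static double perAppMeanMs(List<Long> containerLatenciesMs) {
    return containerLatenciesMs.stream()
        .mapToLong(Long::longValue)
        .average()
        .orElse(0);
  }
}
{code}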

> CapacityScheduler: Compute per-container allocation latency and roll up to 
> get per-application and per-queue
> 
>
> Key: YARN-4488
> URL: https://issues.apache.org/jira/browse/YARN-4488
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Karthik Kambatla
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-4485.001.patch
>
>







[jira] [Assigned] (YARN-4488) CapacityScheduler: Compute per-container allocation latency and roll up to get per-application and per-queue

2018-02-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan reassigned YARN-4488:


Assignee: Manikandan R  (was: Wangda Tan)

> CapacityScheduler: Compute per-container allocation latency and roll up to 
> get per-application and per-queue
> 
>
> Key: YARN-4488
> URL: https://issues.apache.org/jira/browse/YARN-4488
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Karthik Kambatla
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-4485.001.patch
>
>







[jira] [Commented] (YARN-7732) Support Generic AM Simulator from SynthGenerator

2018-02-20 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370881#comment-16370881
 ] 

Wangda Tan commented on YARN-7732:
--

[~curino], according to https://issues.apache.org/jira/browse/INFRA-15859, I 
think we don't need to commit changes to branch-3; that branch should be 
removed.

> Support Generic AM Simulator from SynthGenerator
> 
>
> Key: YARN-7732
> URL: https://issues.apache.org/jira/browse/YARN-7732
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
> Attachments: YARN-7732-YARN-7798.01.patch, 
> YARN-7732-YARN-7798.02.patch, YARN-7732.01.patch, YARN-7732.02.patch, 
> YARN-7732.03.patch, YARN-7732.04.patch, YARN-7732.05.patch, YARN-7732.06.patch
>
>
> Extract the MapReduce-specific setup in the SLSRunner into the 
> MRAMSimulator, and enable support for pluggable AMSimulators.
> Previously, the AM setup in SLSRunner had the MRAMSimulator type hard-coded; 
> for example, startAMFromSynthGenerator() calls this:
>  
> {code:java}
> runNewAM(SLSUtils.DEFAULT_JOB_TYPE, user, jobQueue, oldJobId,
> jobStartTimeMS, jobFinishTimeMS, containerList, reservationId,
> job.getDeadline(), getAMContainerResource(null));
> {code}
> where SLSUtils.DEFAULT_JOB_TYPE = "mapreduce".
> The container setup was also only suitable for mapreduce:
>  
> {code:java}
> // Source: https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
> // map tasks
> for (int i = 0; i < job.getNumberMaps(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.MAP, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size())))
>   .getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>   (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(new ContainerSimulator(containerResource,
>   containerLifeTime, hostname, DEFAULT_MAPPER_PRIORITY, "map"));
> }
> // reduce tasks
> for (int i = 0; i < job.getNumberReduces(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.REDUCE, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size())))
>   .getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>   (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(
>   new ContainerSimulator(containerResource, containerLifeTime,
>   hostname, DEFAULT_REDUCER_PRIORITY, "reduce"));
> }
> {code}
>  
> In addition, the syn.json format supported only mapreduce (the parameters 
> were very specific: mtime, rtime, mtasks, rtasks, etc.).
> This patch aims to introduce a new syn.json format that can describe generic 
> jobs, and the SLS setup required to support the synth generation of generic 
> jobs.
> See syn_generic.json for an equivalent of the previous syn.json in the new 
> format.
> Using the new generic format, we describe a StreamAMSimulator, which 
> simulates a long-running streaming service that maintains N containers for 
> the lifetime of the AM. See syn_stream.json.
>  
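To make the pluggable-AMSimulator idea concrete, a sketch of the 
job-type-to-simulator mapping this enables (the registry below is hypothetical; 
MRAMSimulator and StreamAMSimulator are the real classes, but the job-type keys 
and lookup mechanism are assumptions, not the SLS config format):

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: resolve the AMSimulator implementation from the job
// type found in the trace, instead of hard-coding
// SLSUtils.DEFAULT_JOB_TYPE ("mapreduce").
final class AmSimulatorRegistry {
  private final Map<String, String> simulatorClassByJobType = new HashMap<>();

  AmSimulatorRegistry() {
    simulatorClassByJobType.put("mapreduce",
        "org.apache.hadoop.yarn.sls.appmaster.MRAMSimulator");
    simulatorClassByJobType.put("stream",
        "org.apache.hadoop.yarn.sls.appmaster.StreamAMSimulator");
  }

  String resolve(String jobType) {
    String clazz = simulatorClassByJobType.get(jobType);
    if (clazz == null) {
      throw new IllegalArgumentException("No AM simulator for: " + jobType);
    }
    return clazz;
  }
}
{code}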






[jira] [Commented] (YARN-7707) [GPG] Policy generator framework

2018-02-20 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370867#comment-16370867
 ] 

genericqa commented on YARN-7707:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-7402 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 8s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
58s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
33s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
23s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-7402 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
15s{color} | {color:green} YARN-7402 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  0s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 226 unchanged - 0 fixed = 227 total (was 226) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
23s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
14s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-server-globalpolicygenerator in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7707 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-7708) [GPG] Load based policy generator

2018-02-20 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370841#comment-16370841
 ] 

genericqa commented on YARN-7708:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-7402 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m  
1s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
27s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
54s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
28s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
29s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-7402 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
8s{color} | {color:green} YARN-7402 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  6s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 59 new + 226 unchanged - 0 fixed = 285 total (was 226) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
51s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
48s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
26s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
24s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-yarn-server-globalpolicygenerator in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 

[jira] [Commented] (YARN-7929) SLS supports setting container execution

2018-02-20 Thread Young Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370834#comment-16370834
 ] 

Young Chen commented on YARN-7929:
--

Hi [~yangjiandan], I just committed a patch that enables generic AM simulators 
for synthetic load generation, and after giving this patch a quick 
read-through, I think you'll run into some merge conflicts. Let me know if you 
have any questions about the code I committed; I'll be happy to help.

On a side note, I was actually looking to enable Opportunistic containers in 
SLS, so I'm very much looking forward to this patch!

Quick question: What's the purpose of adding the "water level" to the 
NMSimulator?

 
{code:java}
    if (waterLevel > 0 && waterLevel <= 1) {
      // Pre-fill the node with the given fraction of its total memory and
      // vcore capacity, reporting it as both container and node utilization.
      int pMemUsed =
          (int) (node.getTotalCapability().getMemorySize() * waterLevel);
      float cpuUsed =
          node.getTotalCapability().getVirtualCores() * waterLevel;
      ResourceUtilization resourceUtilization =
          ResourceUtilization.newInstance(pMemUsed, pMemUsed, cpuUsed);
      ns.setContainersUtilization(resourceUtilization);
      ns.setNodeUtilization(resourceUtilization);
    }
{code}
 

> SLS supports setting container execution
> 
>
> Key: YARN-7929
> URL: https://issues.apache.org/jira/browse/YARN-7929
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: YARN-7929.001.patch
>
>
> SLS currently supports three trace types, SYNTH, SLS and RUMEN, but the trace 
> file cannot set the execution type of a container.
>  This jira will introduce execution type in SLS to enable better simulation, 
> which will help perf testing with regard to Opportunistic Containers.
>  RUMEN uses the default execution type GUARANTEED.
>  SYNTH sets the execution type via the fields map_execution_type and 
> reduce_execution_type.
>  SLS sets the execution type via the field container.execution_type.
>  For compatibility, GUARANTEED is used as the default value when the above 
> fields are not set in the trace file.
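A tiny sketch of the compatibility rule above (the field names are the ones 
proposed in this jira; the parsing helper itself is illustrative):

{code:java}
import java.util.Map;

// Illustrative only: default to GUARANTEED when a trace entry does not set
// an execution type field (map_execution_type, reduce_execution_type or
// container.execution_type).
final class ExecTypeDefaults {
  static String executionType(Map<String, String> traceFields, String field) {
    return traceFields.getOrDefault(field, "GUARANTEED");
  }
}
{code}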






[jira] [Updated] (YARN-7950) Add dir permission related information in documentation for secure yarn setup

2018-02-20 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated YARN-7950:
-
Description: 
YARN throws the error below while setting up a secure cluster. We should 
document that the parent dir should also have specific owners (i.e. root).

{code}ERROR org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error 
starting NodeManager
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Failed to initialize 
container executor
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:388)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:899)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:960)
Caused by: java.io.IOException: Linux container executor not configured 
properly (error=24)
at 
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.init(LinuxContainerExecutor.java:306)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:386)
... 3 more
Caused by: 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException:
 ExitCodeException exitCode=24: File /opt/hadoop/hadoop-3.2.0-SNAPSHOT must be 
owned by root, but is owned by 1001{code}

  was:

{code}ERROR org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error 
starting NodeManager
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Failed to initialize 
container executor
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:388)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:899)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:960)
Caused by: java.io.IOException: Linux container executor not configured 
properly (error=24)
at 
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.init(LinuxContainerExecutor.java:306)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:386)
... 3 more
Caused by: 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException:
 ExitCodeException exitCode=24: File /opt/hadoop/hadoop-3.2.0-SNAPSHOT must be 
owned by root, but is owned by 1001{code}


> Add dir permission related information in documentation for secure yarn setup
> -
>
> Key: YARN-7950
> URL: https://issues.apache.org/jira/browse/YARN-7950
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Priority: Major
>
> YARN throws the error below while setting up a secure cluster. We should 
> document that the parent dir should also have specific owners (i.e. root).
> {code}ERROR org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error 
> starting NodeManager
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Failed to initialize 
> container executor
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:388)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:899)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:960)
> Caused by: java.io.IOException: Linux container executor not configured 
> properly (error=24)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.init(LinuxContainerExecutor.java:306)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:386)
>   ... 3 more
> Caused by: 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException:
>  ExitCodeException exitCode=24: File /opt/hadoop/hadoop-3.2.0-SNAPSHOT must 
> be owned by root, but is owned by 1001{code}






[jira] [Commented] (YARN-7732) Support Generic AM Simulator from SynthGenerator

2018-02-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370825#comment-16370825
 ] 

Hudson commented on YARN-7732:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13689 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13689/])
YARN-7732. Support Generic AM Simulator from SynthGenerator. (carlo curino: rev 
84cea0011ffe510d24cf9f2952944f7a6fe622cf)
* (edit) hadoop-tools/hadoop-sls/pom.xml
* (add) hadoop-tools/hadoop-sls/src/test/resources/syn_generic.json
* (edit) 
hadoop-tools/hadoop-sls/src/test/java/org/apache/hadoop/yarn/sls/TestSynthJobGeneration.java
* (edit) 
hadoop-tools/hadoop-sls/src/test/java/org/apache/hadoop/yarn/sls/BaseSLSRunnerTest.java
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/synthetic/SynthTraceJobProducer.java
* (add) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/appmaster/StreamAMSimulator.java
* (delete) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/synthetic/SynthJobClass.java
* (add) hadoop-tools/hadoop-sls/src/test/resources/syn_stream.json
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/appmaster/AMSimulator.java
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/synthetic/SynthJob.java
* (edit) hadoop-tools/hadoop-sls/src/test/resources/syn.json
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
* (delete) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/synthetic/SynthWorkload.java
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/appmaster/MRAMSimulator.java
* (edit) 
hadoop-tools/hadoop-sls/src/test/java/org/apache/hadoop/yarn/sls/appmaster/TestAMSimulator.java
* (add) 
hadoop-tools/hadoop-sls/src/test/java/org/apache/hadoop/yarn/sls/TestSLSStreamAMSynth.java
* (add) 
hadoop-tools/hadoop-sls/src/test/java/org/apache/hadoop/yarn/sls/TestSLSGenericSynth.java
* (add) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/appmaster/package-info.java
* (edit) hadoop-tools/hadoop-sls/src/test/resources/sls-runner.xml


> Support Generic AM Simulator from SynthGenerator
> 
>
> Key: YARN-7732
> URL: https://issues.apache.org/jira/browse/YARN-7732
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
> Attachments: YARN-7732-YARN-7798.01.patch, 
> YARN-7732-YARN-7798.02.patch, YARN-7732.01.patch, YARN-7732.02.patch, 
> YARN-7732.03.patch, YARN-7732.04.patch, YARN-7732.05.patch, YARN-7732.06.patch
>
>
> Extract the MapReduce-specific setup in the SLSRunner into the 
> MRAMSimulator, and enable support for pluggable AMSimulators.
> Previously, the AM setup in SLSRunner had the MRAMSimulator type hard-coded; 
> for example, startAMFromSynthGenerator() calls this:
>  
> {code:java}
> runNewAM(SLSUtils.DEFAULT_JOB_TYPE, user, jobQueue, oldJobId,
> jobStartTimeMS, jobFinishTimeMS, containerList, reservationId,
> job.getDeadline(), getAMContainerResource(null));
> {code}
> where SLSUtils.DEFAULT_JOB_TYPE = "mapreduce".
> The container setup was also only suitable for mapreduce:
>  
> {code:java}
> // Source: https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
> // map tasks
> for (int i = 0; i < job.getNumberMaps(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.MAP, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size())))
>   .getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>   (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(new ContainerSimulator(containerResource,
>   containerLifeTime, hostname, DEFAULT_MAPPER_PRIORITY, "map"));
> }
> // reduce tasks
> for (int i = 0; i < job.getNumberReduces(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.REDUCE, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size())))
>   .getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) 

[jira] [Commented] (YARN-7798) Refactor SLS Reservation Creation

2018-02-20 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370822#comment-16370822
 ] 

Carlo Curino commented on YARN-7798:


Cherry-picked back to branch-3 cleanly (and spot checks of the SLS tests run 
fine).

> Refactor SLS Reservation Creation
> -
>
> Key: YARN-7798
> URL: https://issues.apache.org/jira/browse/YARN-7798
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: YARN-7798.01.patch, YARN-7798.02.patch, 
> YARN-7798.03.patch
>
>
> Move the reservation request creation out of SLSRunner and delegate to the 
> AMSimulator instance.






[jira] [Commented] (YARN-7732) Support Generic AM Simulator from SynthGenerator

2018-02-20 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370821#comment-16370821
 ] 

Carlo Curino commented on YARN-7732:


Thanks [~youchen] for the contribution, and [~leftnoteasy] for reviewing. I 
committed this to trunk, and cherry-picked this patch (and YARN-7798) back to 
branch-3, since it was a clean cherry-pick and spot runs of the SLS tests look 
good.
[~leftnoteasy] and [~yufeigu], if you see issues with this cherry-pick, let me 
know and we can easily revert. I would like, as much as possible, to have all 
the newer SLS magic available in all branches, as it is very useful for 
regression/integration/performance testing.

[~youchen], can you check why YARN-7798 does not apply to branch-2? It might be 
a very simple fix; in that case, please provide a patch for both YARN-7798 and 
YARN-7732 that works on branch-2, so we can backport there as well.

> Support Generic AM Simulator from SynthGenerator
> 
>
> Key: YARN-7732
> URL: https://issues.apache.org/jira/browse/YARN-7732
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
> Attachments: YARN-7732-YARN-7798.01.patch, 
> YARN-7732-YARN-7798.02.patch, YARN-7732.01.patch, YARN-7732.02.patch, 
> YARN-7732.03.patch, YARN-7732.04.patch, YARN-7732.05.patch, YARN-7732.06.patch
>
>
> Extract the MapReduce-specific setup in the SLSRunner into the 
> MRAMSimulator, and enable support for pluggable AMSimulators.
> Previously, the AM setup in SLSRunner had the MRAMSimulator type hard-coded; 
> for example, startAMFromSynthGenerator() calls this:
>  
> {code:java}
> runNewAM(SLSUtils.DEFAULT_JOB_TYPE, user, jobQueue, oldJobId,
> jobStartTimeMS, jobFinishTimeMS, containerList, reservationId,
> job.getDeadline(), getAMContainerResource(null));
> {code}
> where SLSUtils.DEFAULT_JOB_TYPE = "mapreduce".
> The container setup was also only suitable for mapreduce:
>  
> {code:java}
> // Source: https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
> // map tasks
> for (int i = 0; i < job.getNumberMaps(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.MAP, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size())))
>   .getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>   (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(new ContainerSimulator(containerResource,
>   containerLifeTime, hostname, DEFAULT_MAPPER_PRIORITY, "map"));
> }
> // reduce tasks
> for (int i = 0; i < job.getNumberReduces(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.REDUCE, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size())))
>   .getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>   (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(
>   new ContainerSimulator(containerResource, containerLifeTime,
>   hostname, DEFAULT_REDUCER_PRIORITY, "reduce"));
> }
> {code}
>  
> In addition, the syn.json format supported only mapreduce (the parameters 
> were very specific: mtime, rtime, mtasks, rtasks, etc.).
> This patch aims to introduce a new syn.json format that can describe generic 
> jobs, and the SLS setup required to support the synth generation of generic 
> jobs.
> See syn_generic.json for an equivalent of the previous syn.json in the new 
> format.
> Using the new generic format, we describe a StreamAMSimulator, which 
> simulates a long-running streaming service that maintains N containers for 
> the lifetime of the AM. See syn_stream.json.
>  






[jira] [Updated] (YARN-7950) Add dir permission related information in documentation for secure yarn setup

2018-02-20 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated YARN-7950:
-
Description: 

{code}ERROR org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error 
starting NodeManager
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Failed to initialize 
container executor
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:388)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:899)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:960)
Caused by: java.io.IOException: Linux container executor not configured 
properly (error=24)
at 
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.init(LinuxContainerExecutor.java:306)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:386)
... 3 more
Caused by: 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException:
 ExitCodeException exitCode=24: File /opt/hadoop/hadoop-3.2.0-SNAPSHOT must be 
owned by root, but is owned by 1001{code}

> Add dir permission related information in documentation for secure yarn setup
> -
>
> Key: YARN-7950
> URL: https://issues.apache.org/jira/browse/YARN-7950
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Priority: Major
>
> {code}ERROR org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error 
> starting NodeManager
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Failed to initialize 
> container executor
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:388)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:899)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:960)
> Caused by: java.io.IOException: Linux container executor not configured 
> properly (error=24)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.init(LinuxContainerExecutor.java:306)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:386)
>   ... 3 more
> Caused by: 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException:
>  ExitCodeException exitCode=24: File /opt/hadoop/hadoop-3.2.0-SNAPSHOT must 
> be owned by root, but is owned by 1001{code}






[jira] [Created] (YARN-7950) Add dir permission related information in documentation for secure yarn setup

2018-02-20 Thread Ajay Kumar (JIRA)
Ajay Kumar created YARN-7950:


 Summary: Add dir permission related information in documentation 
for secure yarn setup
 Key: YARN-7950
 URL: https://issues.apache.org/jira/browse/YARN-7950
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Ajay Kumar






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7945) Java Doc error in UnmanagedAMPoolManager for branch-2

2018-02-20 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370814#comment-16370814
 ] 

genericqa commented on YARN-7945:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
 1s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: 
The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common 
generated 0 new + 0 unchanged - 2 fixed = 0 total (was 2) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
9s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:17213a0 |
| JIRA Issue | YARN-7945 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911302/YARN-7945-branch-2.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux be31dc82ae06 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / 49ed7d7 |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_151 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19752/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19752/testReport/ |
| Max. process+thread count | 93 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| 

[jira] [Updated] (YARN-7707) [GPG] Policy generator framework

2018-02-20 Thread Young Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Young Chen updated YARN-7707:
-
Attachment: YARN-7707-YARN-7402.05.patch

> [GPG] Policy generator framework
> 
>
> Key: YARN-7707
> URL: https://issues.apache.org/jira/browse/YARN-7707
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Young Chen
>Priority: Major
>  Labels: federation, gpg
> Attachments: YARN-7707-YARN-7402.01.patch, 
> YARN-7707-YARN-7402.02.patch, YARN-7707-YARN-7402.03.patch, 
> YARN-7707-YARN-7402.04.patch, YARN-7707-YARN-7402.05.patch
>
>
> This JIRA tracks the development of a generic framework for querying 
> sub-clusters for metrics, running policies, and updating them in the 
> FederationStateStore.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7707) [GPG] Policy generator framework

2018-02-20 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370799#comment-16370799
 ] 

genericqa commented on YARN-7707:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-7402 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 3s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
43s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
30s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
14s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-7402 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} YARN-7402 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 52s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 226 unchanged - 0 fixed = 227 total (was 226) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
8s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
16s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-yarn-server-globalpolicygenerator in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7707 |
| JIRA Patch URL | 

[jira] [Updated] (YARN-7945) Java Doc error in UnmanagedAMPoolManager for branch-2

2018-02-20 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-7945:
---
Attachment: YARN-7945-branch-2.001.patch

> Java Doc error in UnmanagedAMPoolManager for branch-2
> -
>
> Key: YARN-7945
> URL: https://issues.apache.org/jira/browse/YARN-7945
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.10.0, 2.9.1
>Reporter: Rohith Sharma K S
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-7945-branch-2.001.patch
>
>
> In branch-2, I see a javadoc error while building the package. 
> {code}
> [ERROR] 
> /Users/rsharmaks/Repos/Apache/Commit_Repos/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java:151:
>  error: reference not found
> [ERROR]* @see ApplicationSubmissionContext
> [ERROR]   ^
> [ERROR] 
> /Users/rsharmaks/Repos/Apache/Commit_Repos/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java:204:
>  error: reference not found
> [ERROR]* @see ApplicationSubmissionContext
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7945) Java Doc error in UnmanagedAMPoolManager for branch-2

2018-02-20 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang reassigned YARN-7945:
--

Assignee: Botong Huang

> Java Doc error in UnmanagedAMPoolManager for branch-2
> -
>
> Key: YARN-7945
> URL: https://issues.apache.org/jira/browse/YARN-7945
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.10.0, 2.9.1
>Reporter: Rohith Sharma K S
>Assignee: Botong Huang
>Priority: Major
>
> In branch-2, I see a javadoc error while building the package. 
> {code}
> [ERROR] 
> /Users/rsharmaks/Repos/Apache/Commit_Repos/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java:151:
>  error: reference not found
> [ERROR]* @see ApplicationSubmissionContext
> [ERROR]   ^
> [ERROR] 
> /Users/rsharmaks/Repos/Apache/Commit_Repos/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java:204:
>  error: reference not found
> [ERROR]* @see ApplicationSubmissionContext
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7945) Java Doc error in UnmanagedAMPoolManager for branch-2

2018-02-20 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-7945:
---
Attachment: (was: YARN-7945.001.patch)

> Java Doc error in UnmanagedAMPoolManager for branch-2
> -
>
> Key: YARN-7945
> URL: https://issues.apache.org/jira/browse/YARN-7945
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.10.0, 2.9.1
>Reporter: Rohith Sharma K S
>Assignee: Botong Huang
>Priority: Major
>
> In branch-2, I see a javadoc error while building the package. 
> {code}
> [ERROR] 
> /Users/rsharmaks/Repos/Apache/Commit_Repos/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java:151:
>  error: reference not found
> [ERROR]* @see ApplicationSubmissionContext
> [ERROR]   ^
> [ERROR] 
> /Users/rsharmaks/Repos/Apache/Commit_Repos/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java:204:
>  error: reference not found
> [ERROR]* @see ApplicationSubmissionContext
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7945) Java Doc error in UnmanagedAMPoolManager for branch-2

2018-02-20 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-7945:
---
Attachment: YARN-7945.001.patch

> Java Doc error in UnmanagedAMPoolManager for branch-2
> -
>
> Key: YARN-7945
> URL: https://issues.apache.org/jira/browse/YARN-7945
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.10.0, 2.9.1
>Reporter: Rohith Sharma K S
>Priority: Major
>
> In branch-2, I see a javadoc error while building the package. 
> {code}
> [ERROR] 
> /Users/rsharmaks/Repos/Apache/Commit_Repos/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java:151:
>  error: reference not found
> [ERROR]* @see ApplicationSubmissionContext
> [ERROR]   ^
> [ERROR] 
> /Users/rsharmaks/Repos/Apache/Commit_Repos/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java:204:
>  error: reference not found
> [ERROR]* @see ApplicationSubmissionContext
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7708) [GPG] Load based policy generator

2018-02-20 Thread Young Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370783#comment-16370783
 ] 

Young Chen edited comment on YARN-7708 at 2/21/18 12:24 AM:


The rationale for this policy is to lower load on sub clusters that are 
receiving an abnormal number of jobs or are unable to keep up for performance 
reasons. To alleviate job failures/excessive delays, this policy attempts to 
redirect load away from highly loaded sub clusters by updating a 
WeightedLocalityPolicyManager with modified weights.

This patch introduces the LoadBasedGlobalPolicy. It's configurable in the 
following ways:
 * MAX_EDIT: The maximum number of sub cluster weights this policy will edit. 
The top N clusters by pending load will have their policy weights scaled down.
 * MIN_PENDING: The minimum number of pending applications before a sub cluster 
weight qualifies for editing.
 * MAX_PENDING: The maximum number of pending applications in the scalable 
range. Sub clusters exceeding this will have their policy weights set to 
MIN_WEIGHT.
 * MIN_WEIGHT: The minimum weight possible for a highly loaded sub cluster.
 * SCALING: The scaling method that maps pending load to sub cluster policy 
weight. There are currently three scaling methods: quadratic, log, and linear 
(a hedged sketch of this mapping follows below).
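
For concreteness, a minimal sketch of how pending load might map to a policy 
weight under these knobs (the class and method names below are assumptions for 
illustration, not the API in the attached patch):

{code:java}
// Hedged sketch: map a sub cluster's pending-application count to a policy
// weight in [minWeight, 1.0]. Names are illustrative, not the patch's API.
public final class LoadScalingSketch {
  enum Scaling { LINEAR, QUADRATIC, LOG }

  static double weightFor(int pending, int minPending, int maxPending,
      double minWeight, Scaling scaling) {
    if (pending <= minPending) {
      return 1.0;       // below MIN_PENDING: weight left untouched
    }
    if (pending >= maxPending) {
      return minWeight; // beyond MAX_PENDING: clamp to MIN_WEIGHT
    }
    // Normalized position of this sub cluster inside the scalable range.
    double x = (pending - minPending) / (double) (maxPending - minPending);
    final double scaled;
    switch (scaling) {
      case QUADRATIC:
        scaled = 1.0 - x * x;
        break;
      case LOG:
        // log1p(x * (e - 1)) runs from 0 to 1 as x runs from 0 to 1.
        scaled = 1.0 - Math.log1p(x * (Math.E - 1.0));
        break;
      default: // LINEAR
        scaled = 1.0 - x;
        break;
    }
    return Math.max(minWeight, scaled);
  }
}
{code}

A MAX_EDIT-style cap would then apply this mapping only to the top N sub 
clusters by pending load, leaving the remaining weights untouched.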


was (Author: youchen):
The rationale for this policy is to lower load on sub clusters that are 
receiving an abnormal amount of jobs or are unable to keep up for performance 
reasons. To alleviate job failures/excessive delays, this policy attempts to 
redirect load away from highly loaded sub clusters by updating a 
WeightedLocalityPolicyManager with modified weights.

This patch introduces the LoadBasedGlobalPolicy. It's configurable in the 
following ways:
 * MAX_EDIT: The maximum number of sub clusters this policy will throttle. The 
top N clusters by pending load will have their policy weights scaled down.
 * MIN_PENDING: The minimum number of pending applications before a sub cluster 
weight qualifies for editing.
 * MAX_PENDING: The maximum number of pending applications in the scalable 
range. Sub clusters exceeding this will have their policy weights set to 
MIN_WEIGHT
 * MIN_WEIGHT: The minimum weight possible for a highly loaded sub cluster.
 * SCALING: The scaling method that maps pending load to sub cluster policy 
weight. Currently there are three scaling methods: quadratic, log, and linear. 

> [GPG] Load based policy generator
> -
>
> Key: YARN-7708
> URL: https://issues.apache.org/jira/browse/YARN-7708
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Young Chen
>Priority: Major
> Attachments: YARN-7708-YARN-7402.01.cumulative.patch
>
>
> This policy reads load from the "pendingQueueLength" metrics and provides 
> scaling into a set of weights that influence the AMRMProxy and Router 
> behaviors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7708) [GPG] Load based policy generator

2018-02-20 Thread Young Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370783#comment-16370783
 ] 

Young Chen edited comment on YARN-7708 at 2/21/18 12:24 AM:


The rationale for this policy is to lower load on sub clusters that are 
receiving an abnormal number of jobs or are unable to keep up for performance 
reasons. To alleviate job failures/excessive delays, this policy attempts to 
redirect load away from highly loaded sub clusters by updating a 
WeightedLocalityPolicyManager with modified weights.

This patch introduces the LoadBasedGlobalPolicy. It's configurable in the 
following ways:
 * MAX_EDIT: The maximum number of sub clusters this policy will throttle. The 
top N clusters by pending load will have their policy weights scaled down.
 * MIN_PENDING: The minimum number of pending applications before a sub cluster 
weight qualifies for editing.
 * MAX_PENDING: The maximum number of pending applications in the scalable 
range. Sub clusters exceeding this will have their policy weights set to 
MIN_WEIGHT.
 * MIN_WEIGHT: The minimum weight possible for a highly loaded sub cluster.
 * SCALING: The scaling method that maps pending load to sub cluster policy 
weight. There are currently three scaling methods: quadratic, log, and linear. 


was (Author: youchen):
This patch introduces the LoadBasedGlobalPolicy. It's configurable in the 
following ways:
 * MAX_EDIT: The maximum number of sub clusters this policy will throttle. The 
top N clusters by pending load will have their policy weights scaled down.
 * MIN_PENDING: The minimum number of pending applications before a sub cluster 
weight qualifies for editing.
 * MAX_PENDING: The maximum number of pending applications in the scalable 
range. Sub clusters exceeding this will have their policy weights set to 
MIN_WEIGHT
 * MIN_WEIGHT: The minimum weight possible for a highly loaded sub cluster.
 * SCALING: The scaling method that maps pending load to sub cluster policy 
weight. Currently there are three scaling methods: quadratic, log, and linear. 

> [GPG] Load based policy generator
> -
>
> Key: YARN-7708
> URL: https://issues.apache.org/jira/browse/YARN-7708
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Young Chen
>Priority: Major
> Attachments: YARN-7708-YARN-7402.01.cumulative.patch
>
>
> This policy reads load from the "pendingQueueLength" metrics and provides 
> scaling into a set of weights that influence the AMRMProxy and Router 
> behaviors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7942) Yarn ServiceClient does not delete znode from secure ZooKeeper

2018-02-20 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370782#comment-16370782
 ] 

genericqa commented on YARN-7942:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core:
 The patch generated 3 new + 17 unchanged - 1 fixed = 20 total (was 18) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 34m 13s{color} 
| {color:red} hadoop-yarn-services-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7942 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911290/YARN-7942.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 680b7f6260d2 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9028cca |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19749/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core.txt
 |
| unit | 

[jira] [Commented] (YARN-7708) [GPG] Load based policy generator

2018-02-20 Thread Young Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370783#comment-16370783
 ] 

Young Chen commented on YARN-7708:
--

This patch introduces the LoadBasedGlobalPolicy. It's configurable in the 
following ways:
 * MAX_EDIT: The maximum number of sub clusters this policy will throttle. The 
top N clusters by pending load will have their policy weights scaled down.
 * MIN_PENDING: The minimum number of pending applications before a sub cluster 
weight qualifies for editing.
 * MAX_PENDING: The maximum number of pending applications in the scalable 
range. Sub clusters exceeding this will have their policy weights set to 
MIN_WEIGHT.
 * MIN_WEIGHT: The minimum weight possible for a highly loaded sub cluster.
 * SCALING: The scaling method that maps pending load to sub cluster policy 
weight. Currently there are three scaling methods: quadratic, log, and linear. 

> [GPG] Load based policy generator
> -
>
> Key: YARN-7708
> URL: https://issues.apache.org/jira/browse/YARN-7708
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Young Chen
>Priority: Major
> Attachments: YARN-7708-YARN-7402.01.cumulative.patch
>
>
> This policy reads load from the "pendingQueueLength" metrics and provides 
> scaling into a set of weights that influence the AMRMProxy and Router 
> behaviors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7945) Java Doc error in UnmanagedAMPoolManager for branch-2

2018-02-20 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370770#comment-16370770
 ] 

Subru Krishnan commented on YARN-7945:
--

[~rohithsharma]/[~jlowe], thanks for bringing it to my attention.

[~jlowe], I am not sure how the import got dropped, as it's in the patch and we 
specifically ran yetus against branch-2 successfully before committing.

[~botong], do you want to provide the quick fix?
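
For reference, the kind of one-line fix being discussed (a hedged sketch: 
javadoc resolves a short {{@see}} reference only when the class is imported or 
fully qualified):

{code:java}
// Option 1: restore the import so the short reference resolves.
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;

/**
 * ...
 * @see ApplicationSubmissionContext
 */

// Option 2: fully qualify the reference inside the tag instead.
/**
 * @see org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
 */
{code}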

> Java Doc error in UnmanagedAMPoolManager for branch-2
> -
>
> Key: YARN-7945
> URL: https://issues.apache.org/jira/browse/YARN-7945
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.10.0, 2.9.1
>Reporter: Rohith Sharma K S
>Priority: Major
>
> In branch-2, I see a javadoc error while building the package. 
> {code}
> [ERROR] 
> /Users/rsharmaks/Repos/Apache/Commit_Repos/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java:151:
>  error: reference not found
> [ERROR]* @see ApplicationSubmissionContext
> [ERROR]   ^
> [ERROR] 
> /Users/rsharmaks/Repos/Apache/Commit_Repos/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java:204:
>  error: reference not found
> [ERROR]* @see ApplicationSubmissionContext
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7708) [GPG] Load based policy generator

2018-02-20 Thread Young Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370773#comment-16370773
 ] 

Young Chen commented on YARN-7708:
--

Added a cumulative patch (7708 combined with 7707 while waiting for 7707 to 
merge into 7402).

> [GPG] Load based policy generator
> -
>
> Key: YARN-7708
> URL: https://issues.apache.org/jira/browse/YARN-7708
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Young Chen
>Priority: Major
> Attachments: YARN-7708-YARN-7402.01.cumulative.patch
>
>
> This policy reads load from the "pendingQueueLength" metrics and provides 
> scaling into a set of weights that influence the AMRMProxy and Router 
> behaviors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7945) Java Doc error in UnmanagedAMPoolManager for branch-2

2018-02-20 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370770#comment-16370770
 ] 

Subru Krishnan edited comment on YARN-7945 at 2/21/18 12:02 AM:


[~rohithsharma]/[~jlowe], thanks for bringing it to my attention.

[~jlowe], I am not sure how the import got dropped, as it's in the patch and we 
specifically ran yetus against branch-2 successfully before committing. The 
most likely cause is a regression from fixing an unused-import checkstyle 
warning at commit time.

[~botong], do you want to provide the quick fix?


was (Author: subru):
[~rohithsharma]/[~jlowe], thanks for bringing it to my attention.

[~jlowe], I am not sure how the import got dropped as it's in the patch and we 
specifically ran yetus against branch-2 successfully before committing.

[~botong], do you want to provide the quick fix?

> Java Doc error in UnmanagedAMPoolManager for branch-2
> -
>
> Key: YARN-7945
> URL: https://issues.apache.org/jira/browse/YARN-7945
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.10.0, 2.9.1
>Reporter: Rohith Sharma K S
>Priority: Major
>
> In branch-2, I see a javadoc error while building the package. 
> {code}
> [ERROR] 
> /Users/rsharmaks/Repos/Apache/Commit_Repos/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java:151:
>  error: reference not found
> [ERROR]* @see ApplicationSubmissionContext
> [ERROR]   ^
> [ERROR] 
> /Users/rsharmaks/Repos/Apache/Commit_Repos/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java:204:
>  error: reference not found
> [ERROR]* @see ApplicationSubmissionContext
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7708) [GPG] Load based policy generator

2018-02-20 Thread Young Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Young Chen updated YARN-7708:
-
Attachment: YARN-7708-YARN-7402.01.cumulative.patch

> [GPG] Load based policy generator
> -
>
> Key: YARN-7708
> URL: https://issues.apache.org/jira/browse/YARN-7708
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Young Chen
>Priority: Major
> Attachments: YARN-7708-YARN-7402.01.cumulative.patch
>
>
> This policy reads load from the "pendingQueueLength" metrics and provides 
> scaling into a set of weights that influence the AMRMProxy and Router 
> behaviors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7949) ArtifactsId should not be a compulsory field for new service

2018-02-20 Thread Yesha Vora (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yesha Vora reassigned YARN-7949:


Assignee: Yesha Vora

> ArtifactsId should not be a compulsory field for new service
> 
>
> Key: YARN-7949
> URL: https://issues.apache.org/jira/browse/YARN-7949
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Yesha Vora
>Priority: Major
>
> 1) Click on New Service 
> 2) Create a component
> The Create Component page has Artifacts Id as a compulsory entry. A few YARN 
> service examples, such as sleeper.json, do not need to provide an artifacts id.
> {code:java|title=sleeper.json}
> {
>   "name": "sleeper-service",
>   "components" :
>   [
> {
>   "name": "sleeper",
>   "number_of_containers": 2,
>   "launch_command": "sleep 90",
>   "resource": {
> "cpus": 1,
> "memory": "256"
>   }
> }
>   ]
> }{code}
> Thus, artifactsId should not be a compulsory field.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7949) ArtifactsId should not be a compulsory field for new service

2018-02-20 Thread Yesha Vora (JIRA)
Yesha Vora created YARN-7949:


 Summary: ArtifactsId should not be a compulsory field for new 
service
 Key: YARN-7949
 URL: https://issues.apache.org/jira/browse/YARN-7949
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-ui-v2
Affects Versions: 3.1.0
Reporter: Yesha Vora


1) Click on New Service 
2) Create a component

The Create Component page has Artifacts Id as a compulsory entry. A few YARN 
service examples, such as sleeper.json, do not need to provide an artifacts id.
{code:java|title=sleeper.json}
{
  "name": "sleeper-service",
  "components" :
  [
{
  "name": "sleeper",
  "number_of_containers": 2,
  "launch_command": "sleep 90",
  "resource": {
"cpus": 1,
"memory": "256"
  }
}
  ]
}{code}
Thus, artifactsId should not be a compulsory field.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 beta release

2018-02-20 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370728#comment-16370728
 ] 

Haibo Chen commented on YARN-7346:
--

+1 on option 2. But I am not sure how to pull in two versions of the same 
hbase-common/server/client dependency at the same time in a single maven run, 
given that maven's dependency check and dependency mediation allow only one 
version at a time. Suggestions?

> Fix compilation errors against hbase2 beta release
> --
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7346.00.patch, YARN-7346.01.patch, 
> YARN-7346.02.patch, YARN-7346.03-incremental.patch, YARN-7346.03.patch, 
> YARN-7346.04.patch, YARN-7346.prelim1.patch, YARN-7346.prelim2.patch, 
> YARN-7581.prelim.patch
>
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7732) Support Generic AM Simulator from SynthGenerator

2018-02-20 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370726#comment-16370726
 ] 

genericqa commented on YARN-7732:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-tools/hadoop-sls: The patch generated 0 new + 
49 unchanged - 1 fixed = 49 total (was 50) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
16s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7732 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911283/YARN-7732.06.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 2d05a4642d12 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9028cca |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19748/testReport/ |
| Max. process+thread count | 467 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19748/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was 

[jira] [Updated] (YARN-7707) [GPG] Policy generator framework

2018-02-20 Thread Young Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Young Chen updated YARN-7707:
-
Attachment: YARN-7707-YARN-7402.04.patch

> [GPG] Policy generator framework
> 
>
> Key: YARN-7707
> URL: https://issues.apache.org/jira/browse/YARN-7707
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Young Chen
>Priority: Major
>  Labels: federation, gpg
> Attachments: YARN-7707-YARN-7402.01.patch, 
> YARN-7707-YARN-7402.02.patch, YARN-7707-YARN-7402.03.patch, 
> YARN-7707-YARN-7402.04.patch
>
>
> This JIRA tracks the development of a generic framework for querying 
> sub-clusters for metrics, running policies, and updating them in the 
> FederationStateStore.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7942) Yarn ServiceClient does not delete znode from secure ZooKeeper

2018-02-20 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang reassigned YARN-7942:
---

Assignee: Eric Yang

> Yarn ServiceClient does not delete znode from secure ZooKeeper
> --
>
> Key: YARN-7942
> URL: https://issues.apache.org/jira/browse/YARN-7942
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
> Attachments: YARN-7942.001.patch
>
>
> Even with sasl:rm:cdrwa set on the ZK node (from the registry system accounts 
> property), the RM fails to remove the node with the below error. Despite the 
> failure, the destroy call still reports success.
> {code}
> 2018-02-16 15:49:29,691 WARN  client.ServiceClient 
> (ServiceClient.java:actionDestroy(470)) - Error deleting registry entry 
> /users/hbase/services/yarn-service/hbase-app-test
> org.apache.hadoop.registry.client.exceptions.NoPathPermissionsException: 
> `/registry/users/hbase/services/yarn-service/hbase-app-test': Not authorized 
> to access path; ACLs: [null ACL]: KeeperErrorCode = NoAuth for 
> /registry/users/hbase/services/yarn-service/hbase-app-test
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.operationFailure(CuratorService.java:412)
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.operationFailure(CuratorService.java:390)
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.zkDelete(CuratorService.java:722)
> at 
> org.apache.hadoop.registry.client.impl.zk.RegistryOperationsService.delete(RegistryOperationsService.java:162)
> at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionDestroy(ServiceClient.java:462)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:253)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:243)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.stopService(ApiServer.java:243)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.deleteService(ApiServer.java:223)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at 
> 

[jira] [Updated] (YARN-7942) Yarn ServiceClient does not delete znode from secure ZooKeeper

2018-02-20 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7942:

Attachment: YARN-7942.001.patch

> Yarn ServiceClient does not delete znode from secure ZooKeeper
> --
>
> Key: YARN-7942
> URL: https://issues.apache.org/jira/browse/YARN-7942
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Priority: Blocker
> Attachments: YARN-7942.001.patch
>
>
> Even with sasl:rm:cdrwa set on the ZK node (from the registry system accounts 
> property), the RM fails to remove the node with the below error. Despite the 
> failure, the destroy call still reports success.
> {code}
> 2018-02-16 15:49:29,691 WARN  client.ServiceClient 
> (ServiceClient.java:actionDestroy(470)) - Error deleting registry entry 
> /users/hbase/services/yarn-service/hbase-app-test
> org.apache.hadoop.registry.client.exceptions.NoPathPermissionsException: 
> `/registry/users/hbase/services/yarn-service/hbase-app-test': Not authorized 
> to access path; ACLs: [null ACL]: KeeperErrorCode = NoAuth for 
> /registry/users/hbase/services/yarn-service/hbase-app-test
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.operationFailure(CuratorService.java:412)
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.operationFailure(CuratorService.java:390)
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.zkDelete(CuratorService.java:722)
> at 
> org.apache.hadoop.registry.client.impl.zk.RegistryOperationsService.delete(RegistryOperationsService.java:162)
> at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionDestroy(ServiceClient.java:462)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:253)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:243)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.stopService(ApiServer.java:243)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.deleteService(ApiServer.java:223)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at 
> 

[jira] [Updated] (YARN-7940) Service AM gets NoAuth with secure ZK

2018-02-20 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7940:

Fix Version/s: 3.1.0

> Service AM gets NoAuth with secure ZK
> -
>
> Key: YARN-7940
> URL: https://issues.apache.org/jira/browse/YARN-7940
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Blocker
> Fix For: 3.1.0
>
> Attachments: YARN-7940.01.patch
>
>
> There is a bug in the RegistrySecurity utility class that is causing the ZK 
> sasl client to be misconfigured (a hedged sketch of the relevant client-side 
> knobs follows this quote). This results in NoAuth after the Service AM 
> successfully creates the first node with sasl ACLs for the user.
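
For context on where such a misconfiguration bites, the ZK SASL client is 
driven by a handful of standard knobs; pointing them at the wrong JAAS login 
context produces exactly this NoAuth pattern. A hedged sketch (standard 
ZooKeeper client properties; the file path is an assumption for illustration):

{code:java}
// Standard ZooKeeper client-side SASL knobs (illustrative values):
System.setProperty("zookeeper.sasl.client", "true");          // enable SASL
System.setProperty("zookeeper.sasl.clientconfig", "Client");  // JAAS section name
// Path below is an assumption for illustration only.
System.setProperty("java.security.auth.login.config", "/etc/zk-client.jaas");
{code}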



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7732) Support Generic AM Simulator from SynthGenerator

2018-02-20 Thread Young Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370648#comment-16370648
 ] 

Young Chen edited comment on YARN-7732 at 2/20/18 10:13 PM:


ASF license exclusions added, thanks [~curino] and [~leftnoteasy]!


was (Author: youchen):
ASF license exclusions added.

> Support Generic AM Simulator from SynthGenerator
> 
>
> Key: YARN-7732
> URL: https://issues.apache.org/jira/browse/YARN-7732
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
> Attachments: YARN-7732-YARN-7798.01.patch, 
> YARN-7732-YARN-7798.02.patch, YARN-7732.01.patch, YARN-7732.02.patch, 
> YARN-7732.03.patch, YARN-7732.04.patch, YARN-7732.05.patch, YARN-7732.06.patch
>
>
> Extract the MapReduce specific set-up in the SLSRunner into the 
> MRAMSimulator, and enable support for pluggable AMSimulators.
> Previously, the AM set up in SLSRunner had the MRAMSimulator type hard coded, 
> for example startAMFromSynthGenerator() calls this:
>  
> {code:java}
> runNewAM(SLSUtils.DEFAULT_JOB_TYPE, user, jobQueue, oldJobId,
> jobStartTimeMS, jobFinishTimeMS, containerList, reservationId,
> job.getDeadline(), getAMContainerResource(null));
> {code}
> where SLSUtils.DEFAULT_JOB_TYPE = "mapreduce"
> The container set up was also only suitable for mapreduce: 
>  
> {code:java}
> // Source: https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
> // map tasks
> for (int i = 0; i < job.getNumberMaps(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.MAP, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size())))
>   .getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>   (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(new ContainerSimulator(containerResource,
>   containerLifeTime, hostname, DEFAULT_MAPPER_PRIORITY, "map"));
> }
> // reduce tasks
> for (int i = 0; i < job.getNumberReduces(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.REDUCE, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size())))
>   .getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>   (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(
>   new ContainerSimulator(containerResource, containerLifeTime,
>   hostname, DEFAULT_REDUCER_PRIORITY, "reduce"));
> }
> {code}
>  
> In addition, the syn.json format supported only mapreduce (the parameters 
> were very specific: mtime, rtime, mtasks, rtasks, etc.).
> This patch aims to introduce a new syn.json format that can describe generic 
> jobs, and the SLS setup required to support the synth generation of generic 
> jobs.
> See syn_generic.json for an equivalent of the previous syn.json in the new 
> format.
> Using the new generic format, we describe a StreamAMSimulator, which simulates 
> a long-running streaming service that maintains N containers for the 
> lifetime of the AM. See syn_stream.json.
>  
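
To illustrate what "pluggable" enables here, a minimal sketch (class and method names are hypothetical, not the patch's API) of resolving the AM simulator from the trace's job type instead of hard-coding it:

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: map the job type found in the synthetic trace to an
// AM simulator class, so new simulators can be plugged in via registration.
public class AmSimulatorRegistry {
  private final Map<String, Class<?>> simulators = new HashMap<>();

  public void register(String jobType, Class<?> simulatorClass) {
    simulators.put(jobType, simulatorClass);
  }

  public Object newSimulator(String jobType) throws Exception {
    Class<?> clazz = simulators.get(jobType);
    if (clazz == null) {
      throw new IllegalArgumentException("No AM simulator for type " + jobType);
    }
    // Instantiate reflectively; MRAMSimulator and StreamAMSimulator would
    // both be registered this way rather than being special-cased.
    return clazz.getDeclaredConstructor().newInstance();
  }
}
{code}

With such a registry, SLSUtils.DEFAULT_JOB_TYPE becomes just one registered entry rather than a hard-coded branch.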



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7732) Support Generic AM Simulator from SynthGenerator

2018-02-20 Thread Young Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Young Chen updated YARN-7732:
-
Attachment: YARN-7732.06.patch

> Support Generic AM Simulator from SynthGenerator
> 
>
> Key: YARN-7732
> URL: https://issues.apache.org/jira/browse/YARN-7732
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
> Attachments: YARN-7732-YARN-7798.01.patch, 
> YARN-7732-YARN-7798.02.patch, YARN-7732.01.patch, YARN-7732.02.patch, 
> YARN-7732.03.patch, YARN-7732.04.patch, YARN-7732.05.patch, YARN-7732.06.patch
>
>
> Extract the MapReduce specific set-up in the SLSRunner into the 
> MRAMSimulator, and enable support for pluggable AMSimulators.
> Previously, the AM set up in SLSRunner had the MRAMSimulator type hard coded, 
> for example startAMFromSynthGenerator() calls this:
>  
> {code:java}
> runNewAM(SLSUtils.DEFAULT_JOB_TYPE, user, jobQueue, oldJobId,
> jobStartTimeMS, jobFinishTimeMS, containerList, reservationId,
> job.getDeadline(), getAMContainerResource(null));
> {code}
> where SLSUtils.DEFAULT_JOB_TYPE = "mapreduce"
> The container set up was also only suitable for mapreduce: 
>  
> {code:java}
> // Source: https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
> // map tasks
> for (int i = 0; i < job.getNumberMaps(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.MAP, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size())))
>   .getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>   (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(new ContainerSimulator(containerResource,
>   containerLifeTime, hostname, DEFAULT_MAPPER_PRIORITY, "map"));
> }
> // reduce tasks
> for (int i = 0; i < job.getNumberReduces(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.REDUCE, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size())))
>   .getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>   (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(
>   new ContainerSimulator(containerResource, containerLifeTime,
>   hostname, DEFAULT_REDUCER_PRIORITY, "reduce"));
> }
> {code}
>  
> In addition, the syn.json format supported only mapreduce (the parameters 
> were very specific: mtime, rtime, mtasks, rtasks, etc.).
> This patch aims to introduce a new syn.json format that can describe generic 
> jobs, and the SLS setup required to support the synth generation of generic 
> jobs.
> See syn_generic.json for an equivalent of the previous syn.json in the new 
> format.
> Using the new generic format, we describe a StreamAMSimulator, which simulates 
> a long-running streaming service that maintains N containers for the 
> lifetime of the AM. See syn_stream.json.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7732) Support Generic AM Simulator from SynthGenerator

2018-02-20 Thread Young Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370648#comment-16370648
 ] 

Young Chen commented on YARN-7732:
--

ASF license exclusions added.

> Support Generic AM Simulator from SynthGenerator
> 
>
> Key: YARN-7732
> URL: https://issues.apache.org/jira/browse/YARN-7732
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
> Attachments: YARN-7732-YARN-7798.01.patch, 
> YARN-7732-YARN-7798.02.patch, YARN-7732.01.patch, YARN-7732.02.patch, 
> YARN-7732.03.patch, YARN-7732.04.patch, YARN-7732.05.patch, YARN-7732.06.patch
>
>
> Extract the MapReduce specific set-up in the SLSRunner into the 
> MRAMSimulator, and enable support for pluggable AMSimulators.
> Previously, the AM set up in SLSRunner had the MRAMSimulator type hard coded, 
> for example startAMFromSynthGenerator() calls this:
>  
> {code:java}
> runNewAM(SLSUtils.DEFAULT_JOB_TYPE, user, jobQueue, oldJobId,
> jobStartTimeMS, jobFinishTimeMS, containerList, reservationId,
> job.getDeadline(), getAMContainerResource(null));
> {code}
> where SLSUtils.DEFAULT_JOB_TYPE = "mapreduce"
> The container set up was also only suitable for mapreduce: 
>  
> {code:java}
> // Source: https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
> // map tasks
> for (int i = 0; i < job.getNumberMaps(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.MAP, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size())))
>   .getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>   (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(new ContainerSimulator(containerResource,
>   containerLifeTime, hostname, DEFAULT_MAPPER_PRIORITY, "map"));
> }
> // reduce tasks
> for (int i = 0; i < job.getNumberReduces(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.REDUCE, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size())))
>   .getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>   (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(
>   new ContainerSimulator(containerResource, containerLifeTime,
>   hostname, DEFAULT_REDUCER_PRIORITY, "reduce"));
> }
> {code}
>  
> In addition, the syn.json format supported only mapreduce (the parameters 
> were very specific: mtime, rtime, mtasks, rtasks, etc.).
> This patch aims to introduce a new syn.json format that can describe generic 
> jobs, and the SLS setup required to support the synth generation of generic 
> jobs.
> See syn_generic.json for an equivalent of the previous syn.json in the new 
> format.
> Using the new generic format, we describe a StreamAMSimulator, which simulates 
> a long-running streaming service that maintains N containers for the 
> lifetime of the AM. See syn_stream.json.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7346) Fix compilation errors against hbase2 beta release

2018-02-20 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C reassigned YARN-7346:


Assignee: Haibo Chen  (was: Vrushali C)

> Fix compilation errors against hbase2 beta release
> --
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7346.00.patch, YARN-7346.01.patch, 
> YARN-7346.02.patch, YARN-7346.03-incremental.patch, YARN-7346.03.patch, 
> YARN-7346.04.patch, YARN-7346.prelim1.patch, YARN-7346.prelim2.patch, 
> YARN-7581.prelim.patch
>
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 beta release

2018-02-20 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370629#comment-16370629
 ] 

Vrushali C commented on YARN-7346:
--

I am leaning strongly towards option #2, so that we would know if compilation 
breaks between patches.

One question (or thought): do you think any hdfs or hadoop-common change on 
trunk could cause compilation issues for the hbase-2.x profile? In that case, 
would those patches have a problem getting committed?

Completely agree that for tests the default profile can be set to hbase1. 
Also, by default, hadoop releases would be built for hbase-1.

> Fix compilation errors against hbase2 beta release
> --
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Vrushali C
>Priority: Major
> Attachments: YARN-7346.00.patch, YARN-7346.01.patch, 
> YARN-7346.02.patch, YARN-7346.03-incremental.patch, YARN-7346.03.patch, 
> YARN-7346.04.patch, YARN-7346.prelim1.patch, YARN-7346.prelim2.patch, 
> YARN-7581.prelim.patch
>
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 beta release

2018-02-20 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370619#comment-16370619
 ] 

Vrushali C commented on YARN-7346:
--

Reading through, will get back.

> Fix compilation errors against hbase2 beta release
> --
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Vrushali C
>Priority: Major
> Attachments: YARN-7346.00.patch, YARN-7346.01.patch, 
> YARN-7346.02.patch, YARN-7346.03-incremental.patch, YARN-7346.03.patch, 
> YARN-7346.04.patch, YARN-7346.prelim1.patch, YARN-7346.prelim2.patch, 
> YARN-7581.prelim.patch
>
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7732) Support Generic AM Simulator from SynthGenerator

2018-02-20 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370610#comment-16370610
 ] 

Carlo Curino commented on YARN-7732:


Thanks [~leftnoteasy] for the review. [~youchen], please fix the ASF license 
issue (by adding an exclusion in pom.xml), and I will commit to trunk based on 
Wangda's review (and a quick skim from me).

> Support Generic AM Simulator from SynthGenerator
> 
>
> Key: YARN-7732
> URL: https://issues.apache.org/jira/browse/YARN-7732
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
> Attachments: YARN-7732-YARN-7798.01.patch, 
> YARN-7732-YARN-7798.02.patch, YARN-7732.01.patch, YARN-7732.02.patch, 
> YARN-7732.03.patch, YARN-7732.04.patch, YARN-7732.05.patch
>
>
> Extract the MapReduce specific set-up in the SLSRunner into the 
> MRAMSimulator, and enable support for pluggable AMSimulators.
> Previously, the AM set up in SLSRunner had the MRAMSimulator type hard coded, 
> for example startAMFromSynthGenerator() calls this:
>  
> {code:java}
> runNewAM(SLSUtils.DEFAULT_JOB_TYPE, user, jobQueue, oldJobId,
> jobStartTimeMS, jobFinishTimeMS, containerList, reservationId,
> job.getDeadline(), getAMContainerResource(null));
> {code}
> where SLSUtils.DEFAULT_JOB_TYPE = "mapreduce"
> The container set up was also only suitable for mapreduce: 
>  
> {code:java}
> // Source: https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
> // map tasks
> for (int i = 0; i < job.getNumberMaps(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.MAP, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size())))
>   .getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>   (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(new ContainerSimulator(containerResource,
>   containerLifeTime, hostname, DEFAULT_MAPPER_PRIORITY, "map"));
> }
> // reduce tasks
> for (int i = 0; i < job.getNumberReduces(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.REDUCE, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size())))
>   .getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>   (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(
>   new ContainerSimulator(containerResource, containerLifeTime,
>   hostname, DEFAULT_REDUCER_PRIORITY, "reduce"));
> }
> {code}
>  
> In addition, the syn.json format supported only mapreduce (the parameters 
> were very specific: mtime, rtime, mtasks, rtasks, etc.).
> This patch aims to introduce a new syn.json format that can describe generic 
> jobs, and the SLS setup required to support the synth generation of generic 
> jobs.
> See syn_generic.json for an equivalent of the previous syn.json in the new 
> format.
> Using the new generic format, we describe a StreamAMSimulator, which simulates 
> a long-running streaming service that maintains N containers for the 
> lifetime of the AM. See syn_stream.json.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7403) [GQ] Compute global and local "IdealAllocation"

2018-02-20 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370578#comment-16370578
 ] 

genericqa commented on YARN-7403:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
12s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
57s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 57s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 52s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 34 new + 212 unchanged - 0 fixed = 246 total (was 212) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
42s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
22s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 3 new + 4 unchanged - 0 fixed = 7 total (was 4) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
11s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 24s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
19s{color} | {color:red} The patch generated 5 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | 

[jira] [Created] (YARN-7948) Enable refreshing maximum allocation for multiple resource types

2018-02-20 Thread Yufei Gu (JIRA)
Yufei Gu created YARN-7948:
--

 Summary: Enable refreshing maximum allocation for multiple 
resource types
 Key: YARN-7948
 URL: https://issues.apache.org/jira/browse/YARN-7948
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: fairscheduler
Affects Versions: 3.0.0
Reporter: Yufei Gu
Assignee: Yufei Gu


YARN-7738 did the same thing for the CapacityScheduler. We need a fix for the 
FairScheduler. We could fix it by moving the refresh code from CapacityScheduler 
up into AbstractYarnScheduler.
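
A minimal sketch of what the refresh amounts to (names are illustrative; per the description, the real change would move this logic into AbstractYarnScheduler):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: recompute the scheduler-wide maximum allocation for
// each resource type from a freshly reloaded configuration snapshot.
public class MaxAllocationRefresher {
  private final Map<String, Long> maxAllocation = new ConcurrentHashMap<>();

  public void refresh(Map<String, Long> reloadedMaximums) {
    for (Map.Entry<String, Long> e : reloadedMaximums.entrySet()) {
      // e.g. "memory-mb" -> 8192, "vcores" -> 4, "yarn.io/gpu" -> 2
      maxAllocation.put(e.getKey(), e.getValue());
    }
    // Drop resource types removed from the new configuration.
    maxAllocation.keySet().retainAll(reloadedMaximums.keySet());
  }

  public long getMaxAllocation(String resourceType) {
    return maxAllocation.getOrDefault(resourceType, 0L);
  }
}
{code}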



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 beta release

2018-02-20 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370520#comment-16370520
 ] 

Haibo Chen commented on YARN-7346:
--

{quote}Looking into the timelineservice-hbase-* code, it actually depends on only 2 
jars, i.e. hbase-server and hbase-common
{quote}
Can you elaborate on this a bit? I checked the other dependencies; they are 
indeed relied on by the timelineservice-hbase-* code.

> Fix compilation errors against hbase2 beta release
> --
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Vrushali C
>Priority: Major
> Attachments: YARN-7346.00.patch, YARN-7346.01.patch, 
> YARN-7346.02.patch, YARN-7346.03-incremental.patch, YARN-7346.03.patch, 
> YARN-7346.04.patch, YARN-7346.prelim1.patch, YARN-7346.prelim2.patch, 
> YARN-7581.prelim.patch
>
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7940) Service AM gets NoAuth with secure ZK

2018-02-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370512#comment-16370512
 ] 

Hudson commented on YARN-7940:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13685 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13685/])
YARN-7940. Fixed a bug in ServiceAM ZooKeeper initialization.
(eyang: rev 7280c5af82d36a9be15448293210d871f680f55e)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/CuratorService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/RegistrySecurity.java


> Service AM gets NoAuth with secure ZK
> -
>
> Key: YARN-7940
> URL: https://issues.apache.org/jira/browse/YARN-7940
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Blocker
> Attachments: YARN-7940.01.patch
>
>
> There is a bug in the RegistrySecurity utility class that is causing the ZK 
> sasl client to be misconfigured. This results in NoAuth after the Service AM 
> successfully creates the first node with sasl ACLs for the user.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7403) [GQ] Compute global and local "IdealAllocation"

2018-02-20 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370506#comment-16370506
 ] 

Carlo Curino commented on YARN-7403:


Fixing the TestYarnConfigurationFields unit test failure (the rest still does not 
compile, as it depends on YARN-7403).

> [GQ] Compute global and local "IdealAllocation"
> ---
>
> Key: YARN-7403
> URL: https://issues.apache.org/jira/browse/YARN-7403
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
> Attachments: YARN-7403.draft.patch, YARN-7403.draft2.patch, 
> YARN-7403.draft3.patch, YARN-7403.v1.patch, YARN-7403.v2.patch, 
> global-queues-preemption.PNG
>
>
> This JIRA tracks algorithmic effort to combine the local queue views of 
> capacity guarantee/use/demand and compute the global ideal allocation, and 
> the respective local allocations. This will inform the RMs in each 
> sub-clusters on how to allocate more containers to each queues (allowing for 
> temporary over/under allocations that are locally excessive, but globally 
> correct).
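
To make the goal concrete, a toy sketch of the kind of computation involved (deliberately simplified, not the patch's algorithm): split a queue's global guarantee across sub-clusters in proportion to their reported local demand.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Toy illustration: divide a queue's global capacity guarantee among
// sub-clusters proportionally to their local demand.
public class GlobalIdealAllocationSketch {
  public static Map<String, Double> idealAllocation(
      double globalGuarantee, Map<String, Double> localDemand) {
    double totalDemand = localDemand.values().stream()
        .mapToDouble(Double::doubleValue).sum();
    Map<String, Double> ideal = new HashMap<>();
    for (Map.Entry<String, Double> e : localDemand.entrySet()) {
      double share = totalDemand == 0
          ? 0
          : globalGuarantee * (e.getValue() / totalDemand);
      // A sub-cluster never needs more than its own demand; slack left here
      // is what allows locally excessive but globally correct allocations.
      ideal.put(e.getKey(), Math.min(share, e.getValue()));
    }
    return ideal;
  }
}
{code}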



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7403) [GQ] Compute global and local "IdealAllocation"

2018-02-20 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-7403:
---
Attachment: YARN-7403.v2.patch

> [GQ] Compute global and local "IdealAllocation"
> ---
>
> Key: YARN-7403
> URL: https://issues.apache.org/jira/browse/YARN-7403
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
> Attachments: YARN-7403.draft.patch, YARN-7403.draft2.patch, 
> YARN-7403.draft3.patch, YARN-7403.v1.patch, YARN-7403.v2.patch, 
> global-queues-preemption.PNG
>
>
> This JIRA tracks algorithmic effort to combine the local queue views of 
> capacity guarantee/use/demand and compute the global ideal allocation, and 
> the respective local allocations. This will inform the RMs in each 
> sub-clusters on how to allocate more containers to each queues (allowing for 
> temporary over/under allocations that are locally excessive, but globally 
> correct).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5015) Unify restart policies across AM and container restarts

2018-02-20 Thread Chandni Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370486#comment-16370486
 ] 

Chandni Singh commented on YARN-5015:
-

[~jianhe] [~billie.rinaldi] [~leftnoteasy] Could you please review patch 3?

> Unify restart policies across AM and container restarts
> ---
>
> Key: YARN-5015
> URL: https://issues.apache.org/jira/browse/YARN-5015
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Chandni Singh
>Priority: Major
>  Labels: oct16-medium
> Attachments: YARN-5015.01.patch, YARN-5015.02.patch, 
> YARN-5015.03.patch
>
>
> We support AM restart and container restarts - however the two have slightly 
> different capabilities. We should unify them. There's no reason for them to 
> be different.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 beta release

2018-02-20 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370471#comment-16370471
 ] 

Rohith Sharma K S commented on YARN-7346:
-

Looking into the timelineservice-hbase-* code, it actually depends on only 2 jars, 
i.e. hbase-server and hbase-common. The rest of the dependencies we can remove 
from pom.xml. Considering this, I would recommend option-2, which I mentioned 
above, as it ensures we don't break anything across the different versions of 
hbase. Does it make sense?

> Fix compilation errors against hbase2 beta release
> --
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Vrushali C
>Priority: Major
> Attachments: YARN-7346.00.patch, YARN-7346.01.patch, 
> YARN-7346.02.patch, YARN-7346.03-incremental.patch, YARN-7346.03.patch, 
> YARN-7346.04.patch, YARN-7346.prelim1.patch, YARN-7346.prelim2.patch, 
> YARN-7581.prelim.patch
>
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 beta release

2018-02-20 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370445#comment-16370445
 ] 

Rohith Sharma K S commented on YARN-7346:
-

[~vrushalic] do you have any suggestion for options?

> Fix compilation errors against hbase2 beta release
> --
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Vrushali C
>Priority: Major
> Attachments: YARN-7346.00.patch, YARN-7346.01.patch, 
> YARN-7346.02.patch, YARN-7346.03-incremental.patch, YARN-7346.03.patch, 
> YARN-7346.04.patch, YARN-7346.prelim1.patch, YARN-7346.prelim2.patch, 
> YARN-7581.prelim.patch
>
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7346) Fix compilation errors against hbase2 beta release

2018-02-20 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370445#comment-16370445
 ] 

Rohith Sharma K S edited comment on YARN-7346 at 2/20/18 6:51 PM:
--

[~vrushalic] do you have any suggestion on the above-mentioned options?


was (Author: rohithsharma):
[~vrushalic] do you have any suggestion for options?

> Fix compilation errors against hbase2 beta release
> --
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Vrushali C
>Priority: Major
> Attachments: YARN-7346.00.patch, YARN-7346.01.patch, 
> YARN-7346.02.patch, YARN-7346.03-incremental.patch, YARN-7346.03.patch, 
> YARN-7346.04.patch, YARN-7346.prelim1.patch, YARN-7346.prelim2.patch, 
> YARN-7581.prelim.patch
>
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 beta release

2018-02-20 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370444#comment-16370444
 ] 

Rohith Sharma K S commented on YARN-7346:
-

Ahh, I see! I guess the src/main/java package is compiled by default if it exists. 
If we want to override that compilation, we can use the property below, which is 
NOT recommended.
{code}
<properties>
  <maven.main.skip>true</maven.main.skip>
</properties>
{code}

However, let's do one of the following, which I feel is better:
# Option-1
## Let's activate the hbase-1 vs hbase-2 module based on which profile is set, 
as we are doing in the last patch. Only the respective module is compiled for 
the active profile.
## Let's set the skip property above to true for hbase-2, so that its default 
compilation does NOT happen. This is only to keep jenkins from compiling 
against HBase-2.0 unless the profile is activated.
# Option-2
## Let's compile both hbase-1 and hbase-2 all the time, which ensures 
compilation is not broken by patches.
## Include the coprocessor jar in the distribution only based on the profile 
set.
## For this we need to enable the corresponding dependencies in the hbase-2 
modules as well.
## For tests, the default profile can be set to hbase1.

> Fix compilation errors against hbase2 beta release
> --
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Vrushali C
>Priority: Major
> Attachments: YARN-7346.00.patch, YARN-7346.01.patch, 
> YARN-7346.02.patch, YARN-7346.03-incremental.patch, YARN-7346.03.patch, 
> YARN-7346.04.patch, YARN-7346.prelim1.patch, YARN-7346.prelim2.patch, 
> YARN-7581.prelim.patch
>
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7626) Allow regular expression matching in container-executor.cfg for devices and named docker volumes mount

2018-02-20 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370401#comment-16370401
 ] 

Miklos Szegedi commented on YARN-7626:
--

Indeed, there is not any substantial difference other than saying 
{{regex:/dev/nvidia.*}}. I think the latter is a bit more robust in case we 
try to configure regexes for other purposes in the future. This is just an 
opinion; I'll let you decide.
{quote}what if hackers input user mount like regex+ as a prefix?
{quote}
Regex+ won't be considered valid. What if they put ^.*$? I do not think there 
is a difference, at least not from this point of view. But you just raised an 
important point: the regex has to be properly validated.
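
As a small illustration of that validation point (container-executor itself is C; this Java sketch only mirrors the idea, and the names are hypothetical): compile the configured pattern, reject syntactically invalid ones outright, and match the entire requested path rather than a substring.

{code:java}
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

// Illustrative sketch: validate a requested device mount against a
// configured entry such as "regex:/dev/nvidia.*".
public class MountPatternValidator {
  private static final String REGEX_PREFIX = "regex:";

  public static boolean isAllowed(String configuredEntry, String requestedPath) {
    if (configuredEntry.startsWith(REGEX_PREFIX)) {
      String regex = configuredEntry.substring(REGEX_PREFIX.length());
      try {
        // matches() anchors implicitly, so the whole path must match.
        return Pattern.compile(regex).matcher(requestedPath).matches();
      } catch (PatternSyntaxException e) {
        // An invalid pattern must be rejected, not treated as a literal.
        return false;
      }
    }
    return configuredEntry.equals(requestedPath);
  }
}
{code}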

> Allow regular expression matching in container-executor.cfg for devices and 
> named docker volumes mount
> --
>
> Key: YARN-7626
> URL: https://issues.apache.org/jira/browse/YARN-7626
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-7626.001.patch, YARN-7626.002.patch, 
> YARN-7626.003.patch, YARN-7626.004.patch, YARN-7626.005.patch, 
> YARN-7626.006.patch, YARN-7626.007.patch, YARN-7626.008.patch
>
>
> Currently when we configure some of the GPU device related fields (like ) in 
> container-executor.cfg, these fields are generated based on different driver 
> versions or GPU device names. We want to enable regular expression matching 
> so that users don't need to manually set up these fields when configuring 
> container-executor.cfg.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5714) ContainerExecutor does not order environment map

2018-02-20 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370382#comment-16370382
 ] 

Billie Rinaldi commented on YARN-5714:
--

It's true, the order-preserving approach would require frameworks to opt in by 
providing a map that preserves order. I agree we shouldn't change the API to 
force frameworks to opt in, but I wouldn't have a problem with frameworks 
needing to make a change if they want to preserve order. On the other hand, I'm 
not against detecting dependencies if we decide we want to go that way.
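
For context, a JDK-only sketch of the failure mode and of what an order-preserving opt-in looks like (the values are illustrative):

{code:java}
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class EnvOrderSketch {
  public static void main(String[] args) {
    // HashMap iteration order is hash-based, so LD_LIBRARY_PATH can be
    // dumped before HADOOP_COMMON_HOME and expand to an empty reference.
    Map<String, String> unordered = new HashMap<>();
    unordered.put("LD_LIBRARY_PATH", "$HADOOP_COMMON_HOME/lib/native");
    unordered.put("HADOOP_COMMON_HOME", "/opt/hadoop");

    // A LinkedHashMap preserves insertion order, so a framework that adds
    // HADOOP_COMMON_HOME first gets it exported first in the launch script.
    Map<String, String> ordered = new LinkedHashMap<>();
    ordered.put("HADOOP_COMMON_HOME", "/opt/hadoop");
    ordered.put("LD_LIBRARY_PATH", "$HADOOP_COMMON_HOME/lib/native");

    System.out.println("hash order:      " + unordered.keySet());
    System.out.println("insertion order: " + ordered.keySet());
  }
}
{code}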

> ContainerExecutor does not order environment map
> 
>
> Key: YARN-5714
> URL: https://issues.apache.org/jira/browse/YARN-5714
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.4.1, 2.5.2, 2.7.3, 2.6.4, 3.0.0-alpha1
> Environment: all (linux and windows alike)
>Reporter: Remi Catherinot
>Assignee: Remi Catherinot
>Priority: Trivial
>  Labels: oct16-medium
> Attachments: YARN-5714.001.patch, YARN-5714.002.patch, 
> YARN-5714.003.patch, YARN-5714.004.patch, YARN-5714.005.patch, 
> YARN-5714.006.patch
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> when dumping the launch container script, environment variables are dumped 
> based on the order internally used by the map implementation (hash based). It 
> does not take into consideration that some env variables may refer to each 
> other, and so some env variables must be declared before those 
> referencing them.
> In my case, I ended up having LD_LIBRARY_PATH, which depends on 
> HADOOP_COMMON_HOME, being dumped before HADOOP_COMMON_HOME. Thus it had a 
> wrong value and so native libraries weren't loaded. Jobs were running but not 
> at their best efficiency. This is just one use case falling into that bug, but 
> I'm sure others may happen as well.
> I already have a patch running in my production environment; I estimate 
> 5 days for packaging the patch in the right fashion for JIRA + trying my best 
> to add tests.
> Note: the patch is not OS aware, with a default empty implementation. I will 
> only implement the unix version in a 1st release. I'm not used to windows env 
> variable syntax so it will take me more time/research for it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5028) RMStateStore should trim down app state for completed applications

2018-02-20 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370356#comment-16370356
 ] 

Yufei Gu commented on YARN-5028:


+1 for the last patch. Will commit it later. 

> RMStateStore should trim down app state for completed applications
> --
>
> Key: YARN-5028
> URL: https://issues.apache.org/jira/browse/YARN-5028
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Gergo Repas
>Priority: Major
> Attachments: YARN-5028.000.patch, YARN-5028.001.patch, 
> YARN-5028.002.patch, YARN-5028.003.patch, YARN-5028.004.patch, 
> YARN-5028.005.patch, YARN-5028.006.patch, YARN-5028.007.patch
>
>
> RMStateStore stores enough information to recover applications in case of a 
> restart. The store also retains this information for completed applications 
> to serve their status to REST, WebUI, Java and CLI clients. We don't need all 
> the information we store today to serve application status; for instance, we 
> don't need the {{ApplicationSubmissionContext}}. 
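
A hedged sketch of the trimming idea (field and type names are illustrative; the real ApplicationStateData carries more):

{code:java}
// Illustrative sketch only: persist just what status queries need for a
// completed app, and drop recovery-only payloads such as the submission
// context.
public class CompletedAppState {
  private final String applicationId;
  private final String user;
  private final String finalStatus;
  private final String diagnostics;
  private final long finishTime;

  private CompletedAppState(String applicationId, String user,
      String finalStatus, String diagnostics, long finishTime) {
    this.applicationId = applicationId;
    this.user = user;
    this.finalStatus = finalStatus;
    this.diagnostics = diagnostics;
    this.finishTime = finishTime;
  }

  /** Build the trimmed record; the submission context is deliberately absent. */
  static CompletedAppState trim(String applicationId, String user,
      String finalStatus, String diagnostics, long finishTime) {
    return new CompletedAppState(applicationId, user, finalStatus,
        diagnostics, finishTime);
  }
}
{code}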



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7446) Docker container privileged mode and --user flag contradict each other

2018-02-20 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370264#comment-16370264
 ] 

Eric Yang commented on YARN-7446:
-

Hi [~ebadger], please check version 3 of the patch. We are closing in on the 3.1.0 
RC this week. I'd like to get this one and YARN-7221 to closure. Thanks!

> Docker container privileged mode and --user flag contradict each other
> --
>
> Key: YARN-7446
> URL: https://issues.apache.org/jira/browse/YARN-7446
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
> Attachments: YARN-7446.001.patch, YARN-7446.002.patch, 
> YARN-7446.003.patch
>
>
> In the current implementation, when privileged=true, the --user flag is also 
> passed to docker when launching the container. In reality, the container has no 
> way to use root privileges unless there is a sticky bit or a sudoers entry in 
> the image for the specified user to regain privileges. To avoid dropping and 
> reacquiring root privileges, we can avoid specifying both flags: when 
> privileged mode is enabled, the --user flag should be omitted; when 
> non-privileged mode is enabled, the --user flag is supplied.
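
The rule in the last two sentences, as a tiny sketch (the command assembly is illustrative, not the actual container runtime code):

{code:java}
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the rule in the description: pass --user only for
// non-privileged containers; privileged ones run without the flag.
public class DockerRunArgsSketch {
  public static List<String> buildArgs(boolean privileged, String runAsUser,
      String image) {
    List<String> args = new ArrayList<>();
    args.add("docker");
    args.add("run");
    if (privileged) {
      args.add("--privileged");
    } else {
      args.add("--user");
      args.add(runAsUser);
    }
    args.add(image);
    return args;
  }

  public static void main(String[] args) {
    System.out.println(buildArgs(true, "nobody", "centos:7"));
    System.out.println(buildArgs(false, "nobody", "centos:7"));
  }
}
{code}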



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5028) RMStateStore should trim down app state for completed applications

2018-02-20 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370203#comment-16370203
 ] 

genericqa commented on YARN-5028:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 65m 
59s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-5028 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911224/YARN-5028.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c1f290fc82c3 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8896d20 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19746/testReport/ |
| Max. process+thread count | 841 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19746/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RMStateStore should trim down app state for completed 

[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 beta release

2018-02-20 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370169#comment-16370169
 ] 

Haibo Chen commented on YARN-7346:
--

Thanks [~rohithsharma] for the review! I cannot seem to get it working for 
hbase-server-2 with the following
{code:xml}
<profiles>
  <profile>
    <id>default</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    <build>
      <plugins>
        <plugin>
          <artifactId>maven-compiler-plugin</artifactId>
          <configuration>
            <skipMain>true</skipMain>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>

  <profile>
    <id>hbase2</id>
    <activation>
      <activeByDefault>false</activeByDefault>
      <property>
        <name>hbase.profile</name>
        <value>2.0</value>
      </property>
    </activation>
    <dependencies>
      <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-yarn-server-timelineservice-hbase-common</artifactId>
      </dependency>

      <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-api</artifactId>
      </dependency>

      <dependency>
        <groupId>com.google.guava</groupId>
        <artifactId>guava</artifactId>
      </dependency>

      <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-annotations</artifactId>
        <scope>provided</scope>
      </dependency>

      <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <scope>provided</scope>
      </dependency>

      <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-yarn-api</artifactId>
        <scope>provided</scope>
      </dependency>

      <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-common</artifactId>
        <exclusions>
          <exclusion>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-core</artifactId>
          </exclusion>
          <exclusion>
            <groupId>org.mortbay.jetty</groupId>
            <artifactId>jetty-util</artifactId>
          </exclusion>
        </exclusions>
      </dependency>

      <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-client</artifactId>
        <exclusions>
          <exclusion>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-core</artifactId>
          </exclusion>
        </exclusions>
      </dependency>

      <dependency>
        <groupId>org.jruby.jcodings</groupId>
        <artifactId>jcodings</artifactId>
      </dependency>

      <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-server</artifactId>
        <scope>provided</scope>
        <exclusions>
          <exclusion>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
          </exclusion>
          <exclusion>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs-client</artifactId>
          </exclusion>
          <exclusion>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
          </exclusion>
          <exclusion>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-core</artifactId>
          </exclusion>
          <exclusion>
            <groupId>org.mortbay.jetty</groupId>
            <artifactId>jetty</artifactId>
          </exclusion>
          <exclusion>
            <groupId>org.mortbay.jetty</groupId>
            <artifactId>jetty-util</artifactId>
          </exclusion>
          <exclusion>
            <groupId>org.mortbay.jetty</groupId>
            <artifactId>jetty-sslengine</artifactId>
          </exclusion>
        </exclusions>
      </dependency>
    </dependencies>

    <build>
      <plugins>
        <plugin>
          <artifactId>maven-assembly-plugin</artifactId>
          <executions>
            <execution>
              <id>create-coprocessor-jar</id>
              <phase>prepare-package</phase>
              <goals>
                <goal>single</goal>
              </goals>
              <configuration>
                <descriptors>
                  <descriptor>src/assembly/coprocessor.xml</descriptor>
                </descriptors>
                <attach>true</attach>
              </configuration>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
{code}
Maven would still try to compile the hbase-server-2 module even though 
hbase.profile is not set to 2.0. Any suggestions?
{quote}hadoop-project/pom.xml has jcodings by default. This can be removed 
since hbase2 profile has this dependency explicitly
{quote}
The jcodings entry is in the dependencyManagement section, which is not the same 
as the dependency declaration in the hbase-server-2 module. The best I can do is 
put it in a dependencyManagement section that we add to the hbase2 profile in 
hadoop-project.

> Fix compilation errors against hbase2 beta release
> --
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Vrushali C
>Priority: Major
> Attachments: YARN-7346.00.patch, YARN-7346.01.patch, 
> YARN-7346.02.patch, YARN-7346.03-incremental.patch, YARN-7346.03.patch, 
> YARN-7346.04.patch, YARN-7346.prelim1.patch, YARN-7346.prelim2.patch, 
> YARN-7581.prelim.patch
>
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5028) RMStateStore should trim down app state for completed applications

2018-02-20 Thread Gergo Repas (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370093#comment-16370093
 ] 

Gergo Repas commented on YARN-5028:
---

007.patch: I addressed the checkstyle issue and locally ran all unit tests in the 
RM (Tests run: 2209, Failures: 0, Errors: 0, Skipped: 7). I also tested on a 
real cluster, and recovery worked with the new trimmed-down application state.

> RMStateStore should trim down app state for completed applications
> --
>
> Key: YARN-5028
> URL: https://issues.apache.org/jira/browse/YARN-5028
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Gergo Repas
>Priority: Major
> Attachments: YARN-5028.000.patch, YARN-5028.001.patch, 
> YARN-5028.002.patch, YARN-5028.003.patch, YARN-5028.004.patch, 
> YARN-5028.005.patch, YARN-5028.006.patch, YARN-5028.007.patch
>
>
> RMStateStore stores enough information to recover applications in case of a 
> restart. The store also retains this information for completed applications 
> to serve their status to REST, WebUI, Java and CLI clients. We don't need all 
> the information we store today to serve application status; for instance, we 
> don't need the {{ApplicationSubmissionContext}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5028) RMStateStore should trim down app state for completed applications

2018-02-20 Thread Gergo Repas (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergo Repas updated YARN-5028:
--
Attachment: YARN-5028.007.patch

> RMStateStore should trim down app state for completed applications
> --
>
> Key: YARN-5028
> URL: https://issues.apache.org/jira/browse/YARN-5028
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Gergo Repas
>Priority: Major
> Attachments: YARN-5028.000.patch, YARN-5028.001.patch, 
> YARN-5028.002.patch, YARN-5028.003.patch, YARN-5028.004.patch, 
> YARN-5028.005.patch, YARN-5028.006.patch, YARN-5028.007.patch
>
>
> RMStateStore stores enough information to recover applications in case of a 
> restart. The store also retains this information for completed applications 
> to serve their status to REST, WebUI, Java and CLI clients. We don't need all 
> the information we store today to serve application status; for instance, we 
> don't need the {{ApplicationSubmissionContext}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7947) Capacity Scheduler intra-queue preemption can NPE for non-schedulable apps

2018-02-20 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16369973#comment-16369973
 ] 

Sunil G commented on YARN-7947:
---

Thanks [~eepayne] for finding this and for the patch. 

Looks good to me, committing later today if no objections.

> Capacity Scheduler intra-queue preemption can NPE for non-schedulable apps
> --
>
> Key: YARN-7947
> URL: https://issues.apache.org/jira/browse/YARN-7947
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, scheduler preemption
>Affects Versions: 2.9.0, 2.8.3, 3.0.0, 3.1.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Attachments: YARN-7947.001.patch
>
>
> Intra-queue preemption policy can cause NPE for pending users with no 
> schedulable apps.
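
(For illustration, the class of guard such a fix typically adds is sketched 
below with hypothetical names; this is not the actual YARN-7947 patch.)

{code}
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Hypothetical, self-contained sketch of the failure mode and the guard;
// class and field names are illustrative, not taken from the patch.
public class IntraQueueGuardSketch {
  static class UserState {
    List<String> schedulableApps = Collections.emptyList();
  }

  static void processPendingUsers(List<String> pendingUsers,
      Map<String, UserState> perUserState) {
    for (String user : pendingUsers) {
      // A pending user with no schedulable apps may have no entry here.
      UserState state = perUserState.get(user);
      if (state == null || state.schedulableApps.isEmpty()) {
        continue; // guard: skip rather than dereference a null state
      }
      // ... normal intra-queue preemption computation for this user ...
    }
  }
}
{code}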






[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 beta release

2018-02-20 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16369901#comment-16369901
 ] 

Rohith Sharma K S commented on YARN-7346:
-

A couple of additional comments:
# hadoop-project/pom.xml pulls in jcodings by default. This can be removed, 
since the hbase2 profile declares this dependency explicitly. 
# For the packaging issue discussed in the previous comment, we can use a 
property to name the module structure. There are two modules, 
_hadoop-yarn-server-timelineservice-hbase-server-1_ and 
_hadoop-yarn-server-timelineservice-hbase-server-2_, of which only one is 
activated at any given time. So we can reference the module as 
_hadoop-yarn-server-timelineservice-hbase-server-*${hbase.profile.version}*_ in 
the hadoop-yarn-dist.xml assembly file, and add an _hbase.profile.version_ 
property alongside hbase.version in hadoop-project/pom.xml (see the sketch 
after the build output below).
# Since the 2nd option works, the commented-out modifications in 
hadoop-yarn-dist.xml can be removed. 
# Nit: the module names below should be consistent, using either 
"TimelineService" or "Timeline Service" throughout:
{code}
[INFO] Apache Hadoop YARN TimelineService HBase Backend ... SUCCESS [  0.004 s]
[INFO] Apache Hadoop YARN TimelineService HBase Common  SUCCESS [  0.045 s]
[INFO] Apache Hadoop YARN TimelineService HBase Client  SUCCESS [  0.032 s]
[INFO] Apache Hadoop YARN Timeline Service HBase Servers .. SUCCESS [  0.002 s]
[INFO] Apache Hadoop YARN TimelineService HBase Server 1.2  SUCCESS [  0.014 s]
[INFO] Apache Hadoop YARN Timeline Service HBase tests  SUCCESS [  0.015 s]
{code}
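
A minimal sketch of the property-based approach from point 2 could look like 
the snippet below; the property value and surrounding elements are illustrative 
assumptions, not the actual patch:

{code}
<!-- hadoop-project/pom.xml (sketch): hbase.profile.version is assumed to be
     switched per profile; the values shown are illustrative only. -->
<properties>
  <hbase.version>1.2.6</hbase.version>
  <hbase.profile.version>1</hbase.profile.version> <!-- "2" under the hbase2 profile -->
</properties>

<!-- hadoop-yarn-dist.xml (sketch): a single include then matches whichever
     server module the active profile builds. -->
<moduleSet>
  <includes>
    <include>org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-server-${hbase.profile.version}</include>
  </includes>
</moduleSet>
{code}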

> Fix compilation errors against hbase2 beta release
> --
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Vrushali C
>Priority: Major
> Attachments: YARN-7346.00.patch, YARN-7346.01.patch, 
> YARN-7346.02.patch, YARN-7346.03-incremental.patch, YARN-7346.03.patch, 
> YARN-7346.04.patch, YARN-7346.prelim1.patch, YARN-7346.prelim2.patch, 
> YARN-7581.prelim.patch
>
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.


