[jira] [Commented] (YARN-10341) Yarn Service Container Completed event doesn't get processed

2020-07-07 Thread Billie Rinaldi (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17152870#comment-17152870
 ] 

Billie Rinaldi commented on YARN-10341:
---

I agree, continue looks better here.
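For readers following along, the fix replaces the early `return` with `continue`, so one unknown container no longer drops the remaining completed-container statuses. A minimal standalone sketch of the control flow (names simplified from the AM code; `liveInstances` is passed in here purely for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class CompletedLoop {
  // Returns the ids of containers whose completion events would be dispatched.
  // Containers with no known component instance are skipped, not fatal.
  static List<String> onContainersCompleted(Map<String, String> liveInstances,
                                            List<String> completedIds) {
    List<String> dispatched = new ArrayList<>();
    for (String containerId : completedIds) {
      String instance = liveInstances.get(containerId);
      if (instance == null) {
        // The buggy version used "return" here, silently dropping the
        // rest of the batch; "continue" only skips this one container.
        continue;
      }
      dispatched.add(containerId);
    }
    return dispatched;
  }

  public static void main(String[] args) {
    Map<String, String> live = new java.util.HashMap<>();
    live.put("c1", "worker");
    live.put("c3", "worker");
    // "c2" has no live instance; with continue, "c3" is still processed.
    System.out.println(onContainersCompleted(live,
        java.util.Arrays.asList("c1", "c2", "c3"))); // prints [c1, c3]
  }
}
```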

> Yarn Service Container Completed event doesn't get processed 
> -
>
> Key: YARN-10341
> URL: https://issues.apache.org/jira/browse/YARN-10341
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Critical
> Attachments: YARN-10341.001.patch
>
>
> If there are 10 workers running and containers get killed, after a while we 
> see that only 9 workers are running. This is because the CONTAINER_COMPLETED 
> event is not processed on the AM side. 
> The issue is in the code below:
> {code:java}
> public void onContainersCompleted(List<ContainerStatus> statuses) {
>   for (ContainerStatus status : statuses) {
>     ContainerId containerId = status.getContainerId();
>     ComponentInstance instance = liveInstances.get(status.getContainerId());
>     if (instance == null) {
>       LOG.warn("Container {} Completed. No component instance exists. "
>           + "exitStatus={}. diagnostics={} ",
>           containerId, status.getExitStatus(), status.getDiagnostics());
>       return;
>     }
>     ComponentEvent event =
>         new ComponentEvent(instance.getCompName(), CONTAINER_COMPLETED)
>             .setStatus(status).setInstance(instance)
>             .setContainerId(containerId);
>     dispatcher.getEventHandler().handle(event);
>   }
> }
> {code}
> If a component instance doesn't exist for a container, the loop doesn't 
> process the remaining containers, since the method returns early.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8286) Inform AM of container relaunch

2020-02-26 Thread Billie Rinaldi (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-8286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-8286:
-
Target Version/s: 3.4.0  (was: 3.3.0)

> Inform AM of container relaunch
> ---
>
> Key: YARN-8286
> URL: https://issues.apache.org/jira/browse/YARN-8286
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Adam Antal
>Priority: Critical
>
> The AM may need to perform actions when a container has been relaunched. For 
> example, the service AM would want to change the state it has recorded for 
> the container and retrieve new container status for the container, in case 
> the container IP has changed. (The NM would also need to remove the IP it has 
> stored for the container, so container status calls don't return an IP for a 
> container that is not currently running.)






[jira] [Commented] (YARN-9718) Yarn REST API, services endpoint remote command ejection

2019-09-05 Thread Billie Rinaldi (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923738#comment-16923738
 ] 

Billie Rinaldi commented on YARN-9718:
--

Very minor conflict with branch-3.1, so I uploaded a patch to check for a clean 
build.

> Yarn REST API, services endpoint remote command ejection
> 
>
> Key: YARN-9718
> URL: https://issues.apache.org/jira/browse/YARN-9718
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.2.0, 3.1.1, 3.1.2
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9718-branch-3.1.01.patch, YARN-9718.001.patch, 
> YARN-9718.002.patch, YARN-9718.003.patch, YARN-9718.004.patch
>
>
> Email from Oskars Vegeris:
>  
> During internal infrastructure testing it was discovered that the Hadoop Yarn 
> REST endpoint /app/v1/services contains a command injection vulnerability.
>  
> The services endpoint's normal use-case is for launching containers (e.g. 
> Docker images/apps), however by providing an argument with special shell 
> characters it is possible to execute arbitrary commands on the host server - 
> this would allow an attacker to escalate privileges and gain access. 
>  
> The command injection is possible in the parameter for JVM options - 
> "yarn.service.am.java.opts". It's possible to enter arbitrary shell commands 
> by using sub-shell syntax `cmd` or $(cmd). No shell character filtering is 
> performed. 
>  
> The "launch_command" which needs to be provided is meant for the container 
> and if it's not being run in privileged mode or with special options, host OS 
> should not be accessible.
>  
> I've attached a minimal request sample with an injected 'ping' command. The 
> endpoint can also be found via UI @ 
> [http://yarn-resource-manager:8088/ui2/#/yarn-services]
>  
> If no auth, or "simple auth" (username) is enabled, commands can be executed 
> on the host OS. I know commands can also be run by the "new-application" 
> feature, however this is clearly not meant to be a way to touch the host OS.
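To illustrate the class of input involved, a naive deny-list check for shell metacharacters in a JVM options string might look like the following. This is only a sketch of the idea; it is not the validation shipped in the actual patch, and deny-lists are generally weaker than strict allow-lists:

```java
import java.util.regex.Pattern;

public class JvmOptsCheck {
  // Illustrative deny-list: backticks and $() enable sub-shell evaluation;
  // ; & | < > enable command chaining and redirection. Not the actual
  // Hadoop fix, just a demonstration of the attack surface.
  private static final Pattern UNSAFE = Pattern.compile("[`$;&|<>()]");

  static boolean looksSafe(String jvmOpts) {
    return !UNSAFE.matcher(jvmOpts).find();
  }

  public static void main(String[] args) {
    System.out.println(looksSafe("-Xmx256m -Dkey=value"));       // true
    System.out.println(looksSafe("-Xmx256m `ping evil.host`"));  // false
    System.out.println(looksSafe("-Xmx256m $(ping evil.host)")); // false
  }
}
```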






[jira] [Updated] (YARN-9718) Yarn REST API, services endpoint remote command ejection

2019-09-05 Thread Billie Rinaldi (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9718:
-
Attachment: YARN-9718-branch-3.1.01.patch

> Yarn REST API, services endpoint remote command ejection
> 
>
> Key: YARN-9718
> URL: https://issues.apache.org/jira/browse/YARN-9718
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.2.0, 3.1.1, 3.1.2
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9718-branch-3.1.01.patch, YARN-9718.001.patch, 
> YARN-9718.002.patch, YARN-9718.003.patch, YARN-9718.004.patch
>
>
> Email from Oskars Vegeris:
>  
> During internal infrastructure testing it was discovered that the Hadoop Yarn 
> REST endpoint /app/v1/services contains a command injection vulnerability.
>  
> The services endpoint's normal use-case is for launching containers (e.g. 
> Docker images/apps), however by providing an argument with special shell 
> characters it is possible to execute arbitrary commands on the host server - 
> this would allow an attacker to escalate privileges and gain access. 
>  
> The command injection is possible in the parameter for JVM options - 
> "yarn.service.am.java.opts". It's possible to enter arbitrary shell commands 
> by using sub-shell syntax `cmd` or $(cmd). No shell character filtering is 
> performed. 
>  
> The "launch_command" which needs to be provided is meant for the container 
> and if it's not being run in privileged mode or with special options, host OS 
> should not be accessible.
>  
> I've attached a minimal request sample with an injected 'ping' command. The 
> endpoint can also be found via UI @ 
> [http://yarn-resource-manager:8088/ui2/#/yarn-services]
>  
> If no auth, or "simple auth" (username) is enabled, commands can be executed 
> on the host OS. I know commands can also be run by the "new-application" 
> feature, however this is clearly not meant to be a way to touch the host OS.






[jira] [Commented] (YARN-9718) Yarn REST API, services endpoint remote command ejection

2019-09-05 Thread Billie Rinaldi (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923703#comment-16923703
 ] 

Billie Rinaldi commented on YARN-9718:
--

+1 for patch 4. Thanks, [~eyang]!

> Yarn REST API, services endpoint remote command ejection
> 
>
> Key: YARN-9718
> URL: https://issues.apache.org/jira/browse/YARN-9718
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.2.0, 3.1.1, 3.1.2
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9718.001.patch, YARN-9718.002.patch, 
> YARN-9718.003.patch, YARN-9718.004.patch
>
>
> Email from Oskars Vegeris:
>  
> During internal infrastructure testing it was discovered that the Hadoop Yarn 
> REST endpoint /app/v1/services contains a command injection vulnerability.
>  
> The services endpoint's normal use-case is for launching containers (e.g. 
> Docker images/apps), however by providing an argument with special shell 
> characters it is possible to execute arbitrary commands on the host server - 
> this would allow an attacker to escalate privileges and gain access. 
>  
> The command injection is possible in the parameter for JVM options - 
> "yarn.service.am.java.opts". It's possible to enter arbitrary shell commands 
> by using sub-shell syntax `cmd` or $(cmd). No shell character filtering is 
> performed. 
>  
> The "launch_command" which needs to be provided is meant for the container 
> and if it's not being run in privileged mode or with special options, host OS 
> should not be accessible.
>  
> I've attached a minimal request sample with an injected 'ping' command. The 
> endpoint can also be found via UI @ 
> [http://yarn-resource-manager:8088/ui2/#/yarn-services]
>  
> If no auth, or "simple auth" (username) is enabled, commands can be executed 
> on the host OS. I know commands can also be run by the "new-application" 
> feature, however this is clearly not meant to be a way to touch the host OS.






[jira] [Commented] (YARN-9701) Yarn service cli commands do not connect to ssl enabled RM using ssl-client.xml configs

2019-08-12 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16905261#comment-16905261
 ] 

Billie Rinaldi commented on YARN-9701:
--

cc [~eyang]

> Yarn service cli commands do not connect to ssl enabled RM using 
> ssl-client.xml configs
> ---
>
> Key: YARN-9701
> URL: https://issues.apache.org/jira/browse/YARN-9701
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Major
> Attachments: YARN-9701.001.patch, YARN-9701.002.patch
>
>
> Yarn service commands use the yarn service rest api. When ssl is enabled for 
> RM, the yarn service commands fail as they don't read the ssl-client.xml 
> configs to create ssl connection to the rest api.
> This becomes a problem especially for self-signed certificates, as the 
> truststore location specified at ssl.client.truststore.location is not 
> considered by commands.
> As a workaround, we need to import the certificates into the Java default 
> cacerts file for the yarn service commands to work via ssl. It would be more 
> proper for the yarn service commands to use the configs in ssl-client.xml to 
> configure and create an ssl client connection. The workaround may not even 
> work if ssl-client.xml contains additional necessary properties beyond the 
> truststore-related ones.
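For reference, the truststore settings mentioned above live in ssl-client.xml; a minimal example follows. The path and password are placeholders, and exact property names should be checked against your Hadoop version's documentation:

```xml
<configuration>
  <property>
    <name>ssl.client.truststore.location</name>
    <value>/etc/security/clientKeys/truststore.jks</value>
  </property>
  <property>
    <name>ssl.client.truststore.password</name>
    <value>changeit</value>
  </property>
  <property>
    <name>ssl.client.truststore.type</name>
    <value>jks</value>
  </property>
</configuration>
```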






[jira] [Commented] (YARN-9718) Yarn REST API, services endpoint remote command ejection

2019-08-08 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903234#comment-16903234
 ] 

Billie Rinaldi commented on YARN-9718:
--

Thanks for working on this patch, [~eyang]! I see one issue which is that the 
properties that are validated are obtained in a different way than [the JVM 
options are obtained for the 
AM|https://github.com/apache/hadoop/blob/63161cf590d43fe7f6c905946b029d893b774d77/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java#L1199-L1200].
 It would be best to use this same approach to get the JVM opts property value. 
This looks for the property in the service configuration and in the YARN 
configuration. The current patch is checking the component configuration, which 
is not necessary.
{noformat}
String jvmOpts = YarnServiceConf
.get(YarnServiceConf.JVM_OPTS, "", app.getConfiguration(), conf);
{noformat}




> Yarn REST API, services endpoint remote command ejection
> 
>
> Key: YARN-9718
> URL: https://issues.apache.org/jira/browse/YARN-9718
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.2.0, 3.1.1, 3.1.2
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9718.001.patch, YARN-9718.002.patch
>
>
> Email from Oskars Vegeris:
>  
> During internal infrastructure testing it was discovered that the Hadoop Yarn 
> REST endpoint /app/v1/services contains a command injection vulnerability.
>  
> The services endpoint's normal use-case is for launching containers (e.g. 
> Docker images/apps), however by providing an argument with special shell 
> characters it is possible to execute arbitrary commands on the host server - 
> this would allow an attacker to escalate privileges and gain access. 
>  
> The command injection is possible in the parameter for JVM options - 
> "yarn.service.am.java.opts". It's possible to enter arbitrary shell commands 
> by using sub-shell syntax `cmd` or $(cmd). No shell character filtering is 
> performed. 
>  
> The "launch_command" which needs to be provided is meant for the container 
> and if it's not being run in privileged mode or with special options, host OS 
> should not be accessible.
>  
> I've attached a minimal request sample with an injected 'ping' command. The 
> endpoint can also be found via UI @ 
> [http://yarn-resource-manager:8088/ui2/#/yarn-services]
>  
> If no auth, or "simple auth" (username) is enabled, commands can be executed 
> on the host OS. I know commands can also be run by the "new-application" 
> feature, however this is clearly not meant to be a way to touch the host OS.






[jira] [Commented] (YARN-9613) Avoid remote lookups for RegistryDNS domain

2019-06-12 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16862481#comment-16862481
 ] 

Billie Rinaldi commented on YARN-9613:
--

Thanks for the comment, [~eyang]. That sounds reasonable.

> Avoid remote lookups for RegistryDNS domain
> ---
>
> Key: YARN-9613
> URL: https://issues.apache.org/jira/browse/YARN-9613
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.2
>Reporter: Billie Rinaldi
>Priority: Major
>
> A typical setup for RegistryDNS is for an upstream DNS server to forward DNS 
> queries matching the hadoop.registry.dns.domain-name to RegistryDNS. If the 
> RegistryDNS lookup gets a non-zero DNS RCODE, RegistryDNS performs a remote 
> lookup in upstream DNS servers. For bad queries, this can result in a loop 
> when the upstream DNS server forwards the query back to RegistryDNS.
> To solve this problem, we should avoid performing remote lookups for queries 
> within hadoop.registry.dns.domain-name, which are expected to be handled by 
> RegistryDNS. We may also want to evaluate whether we should add a 
> configuration property that allows the user to disable remote lookups 
> entirely for RegistryDNS, for installations where RegistryDNS is set up as 
> the last DNS server in a chain of DNS servers.






[jira] [Created] (YARN-9614) Support configurable container hostname formats for YARN services

2019-06-10 Thread Billie Rinaldi (JIRA)
Billie Rinaldi created YARN-9614:


 Summary: Support configurable container hostname formats for YARN 
services
 Key: YARN-9614
 URL: https://issues.apache.org/jira/browse/YARN-9614
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Billie Rinaldi


The hostname format used by YARN services is currently 
instance.service.user.domain. We could allow this hostname format to be 
configurable (with some restrictions).






[jira] [Created] (YARN-9613) Avoid remote lookups for RegistryDNS domain

2019-06-10 Thread Billie Rinaldi (JIRA)
Billie Rinaldi created YARN-9613:


 Summary: Avoid remote lookups for RegistryDNS domain
 Key: YARN-9613
 URL: https://issues.apache.org/jira/browse/YARN-9613
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.1.2
Reporter: Billie Rinaldi


A typical setup for RegistryDNS is for an upstream DNS server to forward DNS 
queries matching the hadoop.registry.dns.domain-name to RegistryDNS. If the 
RegistryDNS lookup gets a non-zero DNS RCODE, RegistryDNS performs a remote 
lookup in upstream DNS servers. For bad queries, this can result in a loop when 
the upstream DNS server forwards the query back to RegistryDNS.

To solve this problem, we should avoid performing remote lookups for queries 
within hadoop.registry.dns.domain-name, which are expected to be handled by 
RegistryDNS. We may also want to evaluate whether we should add a configuration 
property that allows the user to disable remote lookups entirely for 
RegistryDNS, for installations where RegistryDNS is set up as the last DNS 
server in a chain of DNS servers.






[jira] [Commented] (YARN-9386) destroying yarn-service is allowed even though running state

2019-05-30 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16851974#comment-16851974
 ] 

Billie Rinaldi commented on YARN-9386:
--

Thanks for the patch, [~kyungwan nam]! I would prefer this to be configurable, 
since people may already be relying on the current behavior. We could default 
to the current behavior and have a YARN configuration option that disables 
destroy for running apps. cc [~leftnoteasy]

> destroying yarn-service is allowed even though running state
> 
>
> Key: YARN-9386
> URL: https://issues.apache.org/jira/browse/YARN-9386
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: kyungwan nam
>Assignee: kyungwan nam
>Priority: Major
> Attachments: YARN-9386.001.patch, YARN-9386.002.patch
>
>
> It looks very dangerous to destroy a running app. It should not be allowed.
> {code}
> [yarn-ats@test ~]$ yarn app -list
> 19/03/12 17:48:49 INFO client.RMProxy: Connecting to ResourceManager at 
> test1.com/10.1.1.11:8050
> 19/03/12 17:48:50 INFO client.AHSProxy: Connecting to Application History 
> server at test1.com/10.1.1.101:10200
> Total number of applications (application-types: [], states: [SUBMITTED, 
> ACCEPTED, RUNNING] and tags: []):3
> Application-Id  Application-NameApplication-Type  
> User   Queue   State Final-State  
>ProgressTracking-URL
> application_1551250841677_0003fbyarn-service  
>ambari-qa default RUNNING   UNDEFINED  
>100% N/A
> application_1552379723611_0002   fb1yarn-service  
> yarn-ats default RUNNING   UNDEFINED  
>100% N/A
> application_1550801435420_0001 ats-hbaseyarn-service  
> yarn-ats default RUNNING   UNDEFINED  
>100% N/A
> [yarn-ats@test ~]$ yarn app -destroy fb1
> 19/03/12 17:49:02 INFO client.RMProxy: Connecting to ResourceManager at 
> test1.com/10.1.1.11:8050
> 19/03/12 17:49:02 INFO client.AHSProxy: Connecting to Application History 
> server at test1.com/10.1.1.101:10200
> 19/03/12 17:49:02 INFO client.RMProxy: Connecting to ResourceManager at 
> test1.com/10.1.1.11:8050
> 19/03/12 17:49:02 INFO client.AHSProxy: Connecting to Application History 
> server at test1.com/10.1.1.101:10200
> 19/03/12 17:49:02 INFO util.log: Logging initialized @1637ms
> 19/03/12 17:49:07 INFO client.ApiServiceClient: Successfully destroyed 
> service fb1
> {code}






[jira] [Resolved] (YARN-9145) [Umbrella] Dynamically add or remove auxiliary services

2019-05-24 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi resolved YARN-9145.
--
   Resolution: Fixed
Fix Version/s: 3.3.0

Yes, thanks for reminding me; I forgot to close the umbrella!

> [Umbrella] Dynamically add or remove auxiliary services
> ---
>
> Key: YARN-9145
> URL: https://issues.apache.org/jira/browse/YARN-9145
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Major
> Fix For: 3.3.0
>
>
> Umbrella to track tasks supporting adding, removing, or updating auxiliary 
> services without NM restart.






[jira] [Updated] (YARN-9521) RM failed to start due to system services

2019-05-23 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9521:
-
Summary: RM failed to start due to system services  (was: RM filed to start 
due to system services)

> RM failed to start due to system services
> -
>
> Key: YARN-9521
> URL: https://issues.apache.org/jira/browse/YARN-9521
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.2
>Reporter: kyungwan nam
>Priority: Major
> Attachments: YARN-9521.001.patch
>
>
> When starting the RM, listing the system services directory failed as follows.
> {code}
> 2019-04-30 17:18:25,441 INFO  client.SystemServiceManagerImpl 
> (SystemServiceManagerImpl.java:serviceInit(114)) - System Service Directory 
> is configured to /services
> 2019-04-30 17:18:25,467 INFO  client.SystemServiceManagerImpl 
> (SystemServiceManagerImpl.java:serviceInit(120)) - UserGroupInformation 
> initialized to yarn (auth:SIMPLE)
> 2019-04-30 17:18:25,467 INFO  service.AbstractService 
> (AbstractService.java:noteFailure(267)) - Service ResourceManager failed in 
> state STARTED
> org.apache.hadoop.service.ServiceStateException: java.io.IOException: 
> Filesystem closed
> at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:203)
> at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:869)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1228)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1269)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1265)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:1265)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1316)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1501)
> Caused by: java.io.IOException: Filesystem closed
> at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:473)
> at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1639)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.(DistributedFileSystem.java:1217)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.(DistributedFileSystem.java:1233)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.(DistributedFileSystem.java:1200)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1179)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1175)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusIterator(DistributedFileSystem.java:1187)
> at 
> org.apache.hadoop.yarn.service.client.SystemServiceManagerImpl.list(SystemServiceManagerImpl.java:375)
> at 
> org.apache.hadoop.yarn.service.client.SystemServiceManagerImpl.scanForUserServices(SystemServiceManagerImpl.java:282)
> at 
> org.apache.hadoop.yarn.service.client.SystemServiceManagerImpl.serviceStart(SystemServiceManagerImpl.java:126)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> ... 13 more
> {code}
> It looks like this is due to the use of the filesystem cache.
> The issue does not happen when I add "fs.hdfs.impl.disable.cache=true" to 
> yarn-site.xml.
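The workaround the reporter describes can be expressed as a yarn-site.xml property. Note that disabling the HDFS filesystem cache may have performance implications for other RM code paths:

```xml
<property>
  <name>fs.hdfs.impl.disable.cache</name>
  <value>true</value>
</property>
```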






[jira] [Commented] (YARN-9254) Externalize Solr data storage

2019-04-19 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16822137#comment-16822137
 ] 

Billie Rinaldi commented on YARN-9254:
--

I opened INFRA-18244 to figure out the Hadoop-trunk-Commit failures on H19.

> Externalize Solr data storage
> -
>
> Key: YARN-9254
> URL: https://issues.apache.org/jira/browse/YARN-9254
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9254.001.patch, YARN-9254.002.patch, 
> YARN-9254.003.patch, YARN-9254.004.patch, YARN-9254.005.patch
>
>
> Application catalog contains embedded Solr.  By default, Solr data is stored 
> in the temp space of the docker container.  For users who want to persist Solr 
> data on HDFS, it would be nice to have a way to pass solr.hdfs.home setting 
> to embedded Solr to externalize Solr data storage.  This also implies passing 
> Kerberos credential settings to Solr JVM in order to access secure HDFS.






[jira] [Commented] (YARN-9254) Externalize Solr data storage

2019-04-19 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16822076#comment-16822076
 ] 

Billie Rinaldi commented on YARN-9254:
--

+1 for patch 5. I tested and verified that Solr was using the HDFS directory 
specified. I also killed the container to see if a new container would be able 
to access the stored data. I had to manually remove the write lock at 
$SOLR_DATA_DIR/index/write.lock for a new container to be able to load the HDFS 
data. Once I did that, the new container was able to come up with the same 
running applications and an application template that I had registered in the 
first instance. So, the HDFS support is working, and I think we should open 
another ticket to support SolrCloud mode, which would help address this 
locking issue. Thanks for the patch, [~eyang]!

> Externalize Solr data storage
> -
>
> Key: YARN-9254
> URL: https://issues.apache.org/jira/browse/YARN-9254
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9254.001.patch, YARN-9254.002.patch, 
> YARN-9254.003.patch, YARN-9254.004.patch, YARN-9254.005.patch
>
>
> Application catalog contains embedded Solr.  By default, Solr data is stored 
> in the temp space of the docker container.  For users who want to persist Solr 
> data on HDFS, it would be nice to have a way to pass solr.hdfs.home setting 
> to embedded Solr to externalize Solr data storage.  This also implies passing 
> Kerberos credential settings to Solr JVM in order to access secure HDFS.






[jira] [Commented] (YARN-8530) Add security filters to Application catalog

2019-04-16 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16819328#comment-16819328
 ] 

Billie Rinaldi commented on YARN-8530:
--

Looks like this failure was due to a protoc version mismatch: protoc version is 
'libprotoc 2.6.1', expected version is '2.5.0'.

> Add security filters to Application catalog
> ---
>
> Key: YARN-8530
> URL: https://issues.apache.org/jira/browse/YARN-8530
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security, yarn-native-services
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-8530.001.patch, YARN-8530.002.patch, 
> YARN-8530.003.patch, YARN-8530.004.patch, YARN-8530.005.patch
>
>
> Application catalog UI does not have any security filter applied.  CORS 
> filter and Authentication filter are required to secure the web application.






[jira] [Commented] (YARN-9254) Externalize Solr data storage

2019-04-16 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16819284#comment-16819284
 ] 

Billie Rinaldi commented on YARN-9254:
--

Thanks for the patch, [~eyang]! It looks like this one has a conflict. Please 
rebase.

> Externalize Solr data storage
> -
>
> Key: YARN-9254
> URL: https://issues.apache.org/jira/browse/YARN-9254
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9254.001.patch, YARN-9254.002.patch, 
> YARN-9254.003.patch, YARN-9254.004.patch
>
>
> Application catalog contains embedded Solr.  By default, Solr data is stored 
> in the temp space of the docker container.  For users who want to persist Solr 
> data on HDFS, it would be nice to have a way to pass the solr.hdfs.home setting 
> to embedded Solr to externalize Solr data storage.  This also implies passing 
> Kerberos credential settings to the Solr JVM in order to access secure HDFS.






[jira] [Commented] (YARN-9466) App catalog navigation stylesheet does not display correctly in Safari

2019-04-16 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16819276#comment-16819276
 ] 

Billie Rinaldi commented on YARN-9466:
--

+1 for patch 2. This appears to fix the Safari issue. Thanks [~eyang]!

> App catalog navigation stylesheet does not display correctly in Safari
> --
>
> Key: YARN-9466
> URL: https://issues.apache.org/jira/browse/YARN-9466
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9466.001.patch, YARN-9466.002.patch, 
> catalog-chrome.png, catalog-safari.png
>
>
> When the navigation side bar has less content than the right side table, the 
> navigation bar will shrink to a smaller size in Safari.  See the attached 
> screenshots for the problem and the desired look.






[jira] [Commented] (YARN-8530) Add security filters to Application catalog

2019-04-16 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16819251#comment-16819251
 ] 

Billie Rinaldi commented on YARN-8530:
--

+1 for patch 5. Thanks [~eyang]!

> Add security filters to Application catalog
> ---
>
> Key: YARN-8530
> URL: https://issues.apache.org/jira/browse/YARN-8530
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security, yarn-native-services
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-8530.001.patch, YARN-8530.002.patch, 
> YARN-8530.003.patch, YARN-8530.004.patch, YARN-8530.005.patch
>
>
> Application catalog UI does not have any security filter applied.  CORS 
> filter and Authentication filter are required to secure the web application.






[jira] [Commented] (YARN-8530) Add security filters to Application catalog

2019-04-15 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16818103#comment-16818103
 ] 

Billie Rinaldi commented on YARN-8530:
--

Thanks, [~eyang]. I tried out this patch and it worked in secure mode (although 
it should be noted that at first I only set hadoop.security.authentication and 
did not set hadoop.http.authentication.type to kerberos, and with those 
settings the catalog app did not work).

In insecure mode I got this error: java.lang.ClassCastException: 
org.apache.hadoop.http.lib.StaticUserWebFilter cannot be cast to 
javax.servlet.Filter.

> Add security filters to Application catalog
> ---
>
> Key: YARN-8530
> URL: https://issues.apache.org/jira/browse/YARN-8530
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security, yarn-native-services
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-8530.001.patch, YARN-8530.002.patch, 
> YARN-8530.003.patch
>
>
> Application catalog UI does not have any security filter applied.  CORS 
> filter and Authentication filter are required to secure the web application.






[jira] [Commented] (YARN-9281) Add express upgrade button to Appcatalog UI

2019-04-13 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816988#comment-16816988
 ] 

Billie Rinaldi commented on YARN-9281:
--

+1 for patch 8. Thanks for the patch [~eyang] and for the review [~adam.antal]!

> Add express upgrade button to Appcatalog UI
> ---
>
> Key: YARN-9281
> URL: https://issues.apache.org/jira/browse/YARN-9281
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9281.001.patch, YARN-9281.002.patch, 
> YARN-9281.003.patch, YARN-9281.004.patch, YARN-9281.005.patch, 
> YARN-9281.006.patch, YARN-9281.007.patch, YARN-9281.008.patch
>
>
> It would be nice to have the ability to upgrade applications deployed by the 
> Application catalog from the Application catalog UI.






[jira] [Commented] (YARN-9255) Improve recommend applications order

2019-04-01 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16806951#comment-16806951
 ] 

Billie Rinaldi commented on YARN-9255:
--

+1 for patch 3. Thanks, [~eyang]!

> Improve recommend applications order
> 
>
> Key: YARN-9255
> URL: https://issues.apache.org/jira/browse/YARN-9255
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9255.001.patch, YARN-9255.002.patch, 
> YARN-9255.003.patch
>
>
> When there is no search term in the application catalog, the recommended 
> application list is random.  The relevance can be fine-tuned by sorting by 
> number of downloads or alphabetical order.






[jira] [Commented] (YARN-7129) Application Catalog initial project setup and source

2019-03-29 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16805041#comment-16805041
 ] 

Billie Rinaldi commented on YARN-7129:
--

Looks like discussion has resolved here. I am planning to commit patch 35 + 
YARN-9348. Thanks for the patch, [~eyang], as well as everyone who contributed 
to the discussion!

> Application Catalog initial project setup and source
> 
>
> Key: YARN-7129
> URL: https://issues.apache.org/jira/browse/YARN-7129
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN Appstore.pdf, YARN-7129.001.patch, 
> YARN-7129.002.patch, YARN-7129.003.patch, YARN-7129.004.patch, 
> YARN-7129.005.patch, YARN-7129.006.patch, YARN-7129.007.patch, 
> YARN-7129.008.patch, YARN-7129.009.patch, YARN-7129.010.patch, 
> YARN-7129.011.patch, YARN-7129.012.patch, YARN-7129.013.patch, 
> YARN-7129.014.patch, YARN-7129.015.patch, YARN-7129.016.patch, 
> YARN-7129.017.patch, YARN-7129.018.patch, YARN-7129.019.patch, 
> YARN-7129.020.patch, YARN-7129.021.patch, YARN-7129.022.patch, 
> YARN-7129.023.patch, YARN-7129.024.patch, YARN-7129.025.patch, 
> YARN-7129.026.patch, YARN-7129.027.patch, YARN-7129.028.patch, 
> YARN-7129.029.patch, YARN-7129.030.patch, YARN-7129.031.patch, 
> YARN-7129.032.patch, YARN-7129.033.patch, YARN-7129.034.patch, 
> YARN-7129.035.patch
>
>
> This task sets up the maven project hadoop-yarn-application-catalog sub-module. 
>  The webapp and docker images are sub-modules in the hadoop-application-catalog 
> project.






[jira] [Reopened] (YARN-9255) Improve recommend applications order

2019-03-12 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi reopened YARN-9255:
--

> Improve recommend applications order
> 
>
> Key: YARN-9255
> URL: https://issues.apache.org/jira/browse/YARN-9255
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9255.001.patch, YARN-9255.002.patch, 
> YARN-9255.003.patch
>
>
> When there is no search term in the application catalog, the recommended 
> application list is random.  The relevance can be fine-tuned by sorting by 
> number of downloads or alphabetical order.






[jira] [Reopened] (YARN-9348) Build issues on hadoop-yarn-application-catalog-webapp

2019-03-12 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi reopened YARN-9348:
--

> Build issues on hadoop-yarn-application-catalog-webapp
> --
>
> Key: YARN-9348
> URL: https://issues.apache.org/jira/browse/YARN-9348
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9348.001.patch, YARN-9348.002.patch, 
> YARN-9348.003.patch, YARN-9348.004.patch, YARN-9348.005.patch
>
>
> A couple of reports show jenkins precommit builds failing due to integration 
> problems between the nodejs libraries and Yetus.  Problems are:
> # Nodejs third party libraries are checked by the whitespace check, which 
> generates many errors.  One possible solution is to move the nodejs libraries 
> from the project top level directory to the target directory to prevent 
> stumbling on whitespace checks.
> # maven clean fails because the clean plugin tries to remove the target directory and 
> files inside target/generated-sources directories, causing race conditions.
> # Building on mac triggers access to the osx keychain in an attempt to log in to 
> Dockerhub.






[jira] [Commented] (YARN-7129) Application Catalog for YARN applications

2019-03-12 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16790876#comment-16790876
 ] 

Billie Rinaldi commented on YARN-7129:
--

Sure, reverting now. I am in favor of having a build flag for triggering the 
profile. This is an optional new YARN application and it doesn't need to 
require changes to the main build.

> Application Catalog for YARN applications
> -
>
> Key: YARN-7129
> URL: https://issues.apache.org/jira/browse/YARN-7129
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: applications
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN Appstore.pdf, YARN-7129.001.patch, 
> YARN-7129.002.patch, YARN-7129.003.patch, YARN-7129.004.patch, 
> YARN-7129.005.patch, YARN-7129.006.patch, YARN-7129.007.patch, 
> YARN-7129.008.patch, YARN-7129.009.patch, YARN-7129.010.patch, 
> YARN-7129.011.patch, YARN-7129.012.patch, YARN-7129.013.patch, 
> YARN-7129.014.patch, YARN-7129.015.patch, YARN-7129.016.patch, 
> YARN-7129.017.patch, YARN-7129.018.patch, YARN-7129.019.patch, 
> YARN-7129.020.patch, YARN-7129.021.patch, YARN-7129.022.patch, 
> YARN-7129.023.patch, YARN-7129.024.patch, YARN-7129.025.patch, 
> YARN-7129.026.patch, YARN-7129.027.patch, YARN-7129.028.patch
>
>
> YARN native services provides a web services API to improve the usability of 
> application deployment on Hadoop using a collection of docker images.  It would 
> be nice to have an application catalog system which provides an editorial and 
> search interface for YARN applications.  This improves the usability of YARN for 
> managing the life cycle of applications.






[jira] [Commented] (YARN-9255) Improve recommend applications order

2019-03-12 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16790765#comment-16790765
 ] 

Billie Rinaldi commented on YARN-9255:
--

Thanks [~eyang]. +1 for patch 3. I understand that the test timeout will be 
fixed in YARN-9281.

> Improve recommend applications order
> 
>
> Key: YARN-9255
> URL: https://issues.apache.org/jira/browse/YARN-9255
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9255.001.patch, YARN-9255.002.patch, 
> YARN-9255.003.patch
>
>
> When there is no search term in the application catalog, the recommended 
> application list is random.  The relevance can be fine-tuned by sorting by 
> number of downloads or alphabetical order.






[jira] [Commented] (YARN-9348) Build issues on hadoop-yarn-application-catalog-webapp

2019-03-07 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16787010#comment-16787010
 ] 

Billie Rinaldi commented on YARN-9348:
--

Thanks [~eyang]!

> Build issues on hadoop-yarn-application-catalog-webapp
> --
>
> Key: YARN-9348
> URL: https://issues.apache.org/jira/browse/YARN-9348
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9348.001.patch, YARN-9348.002.patch, 
> YARN-9348.003.patch, YARN-9348.004.patch, YARN-9348.005.patch
>
>
> A couple of reports show jenkins precommit builds failing due to integration 
> problems between the nodejs libraries and Yetus.  Problems are:
> # Nodejs third party libraries are checked by the whitespace check, which 
> generates many errors.  One possible solution is to move the nodejs libraries 
> from the project top level directory to the target directory to prevent 
> stumbling on whitespace checks.
> # maven clean fails because the clean plugin tries to remove the target directory and 
> files inside target/generated-sources directories, causing race conditions.
> # Building on mac triggers access to the osx keychain in an attempt to log in to 
> Dockerhub.






[jira] [Commented] (YARN-9348) Build issues on hadoop-yarn-application-catalog-webapp

2019-03-06 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16786191#comment-16786191
 ] 

Billie Rinaldi commented on YARN-9348:
--

I think we'll have to commit patch 5 to be able to get a clean build. I am +1 
for patch 5.

> Build issues on hadoop-yarn-application-catalog-webapp
> --
>
> Key: YARN-9348
> URL: https://issues.apache.org/jira/browse/YARN-9348
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9348.001.patch, YARN-9348.002.patch, 
> YARN-9348.003.patch, YARN-9348.004.patch, YARN-9348.005.patch
>
>
> A couple of reports show jenkins precommit builds failing due to integration 
> problems between the nodejs libraries and Yetus.  Problems are:
> # Nodejs third party libraries are checked by the whitespace check, which 
> generates many errors.  One possible solution is to move the nodejs libraries 
> from the project top level directory to the target directory to prevent 
> stumbling on whitespace checks.
> # maven clean fails because the clean plugin tries to remove the target directory and 
> files inside target/generated-sources directories, causing race conditions.
> # Building on mac triggers access to the osx keychain in an attempt to log in to 
> Dockerhub.






[jira] [Commented] (YARN-9348) Build issues on hadoop-yarn-application-catalog-webapp

2019-03-06 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16786114#comment-16786114
 ] 

Billie Rinaldi commented on YARN-9348:
--

Patch 5 is looking good to me. Awaiting precommit build.

> Build issues on hadoop-yarn-application-catalog-webapp
> --
>
> Key: YARN-9348
> URL: https://issues.apache.org/jira/browse/YARN-9348
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9348.001.patch, YARN-9348.002.patch, 
> YARN-9348.003.patch, YARN-9348.004.patch, YARN-9348.005.patch
>
>
> A couple of reports show jenkins precommit builds failing due to integration 
> problems between the nodejs libraries and Yetus.  Problems are:
> # Nodejs third party libraries are checked by the whitespace check, which 
> generates many errors.  One possible solution is to move the nodejs libraries 
> from the project top level directory to the target directory to prevent 
> stumbling on whitespace checks.
> # maven clean fails because the clean plugin tries to remove the target directory and 
> files inside target/generated-sources directories, causing race conditions.
> # Building on mac triggers access to the osx keychain in an attempt to log in to 
> Dockerhub.






[jira] [Commented] (YARN-7129) Application Catalog for YARN applications

2019-03-06 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785699#comment-16785699
 ] 

Billie Rinaldi commented on YARN-7129:
--

Thanks for the heads up, [~ste...@apache.org]. Looking into it now.

> Application Catalog for YARN applications
> -
>
> Key: YARN-7129
> URL: https://issues.apache.org/jira/browse/YARN-7129
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: applications
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN Appstore.pdf, YARN-7129.001.patch, 
> YARN-7129.002.patch, YARN-7129.003.patch, YARN-7129.004.patch, 
> YARN-7129.005.patch, YARN-7129.006.patch, YARN-7129.007.patch, 
> YARN-7129.008.patch, YARN-7129.009.patch, YARN-7129.010.patch, 
> YARN-7129.011.patch, YARN-7129.012.patch, YARN-7129.013.patch, 
> YARN-7129.014.patch, YARN-7129.015.patch, YARN-7129.016.patch, 
> YARN-7129.017.patch, YARN-7129.018.patch, YARN-7129.019.patch, 
> YARN-7129.020.patch, YARN-7129.021.patch, YARN-7129.022.patch, 
> YARN-7129.023.patch, YARN-7129.024.patch, YARN-7129.025.patch, 
> YARN-7129.026.patch, YARN-7129.027.patch, YARN-7129.028.patch
>
>
> YARN native services provides a web services API to improve the usability of 
> application deployment on Hadoop using a collection of docker images.  It would 
> be nice to have an application catalog system which provides an editorial and 
> search interface for YARN applications.  This improves the usability of YARN for 
> managing the life cycle of applications.






[jira] [Updated] (YARN-7129) Application Catalog for YARN applications

2019-03-05 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-7129:
-
Fix Version/s: 3.3.0

> Application Catalog for YARN applications
> -
>
> Key: YARN-7129
> URL: https://issues.apache.org/jira/browse/YARN-7129
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: applications
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN Appstore.pdf, YARN-7129.001.patch, 
> YARN-7129.002.patch, YARN-7129.003.patch, YARN-7129.004.patch, 
> YARN-7129.005.patch, YARN-7129.006.patch, YARN-7129.007.patch, 
> YARN-7129.008.patch, YARN-7129.009.patch, YARN-7129.010.patch, 
> YARN-7129.011.patch, YARN-7129.012.patch, YARN-7129.013.patch, 
> YARN-7129.014.patch, YARN-7129.015.patch, YARN-7129.016.patch, 
> YARN-7129.017.patch, YARN-7129.018.patch, YARN-7129.019.patch, 
> YARN-7129.020.patch, YARN-7129.021.patch, YARN-7129.022.patch, 
> YARN-7129.023.patch, YARN-7129.024.patch, YARN-7129.025.patch, 
> YARN-7129.026.patch, YARN-7129.027.patch, YARN-7129.028.patch
>
>
> YARN native services provides a web services API to improve the usability of 
> application deployment on Hadoop using a collection of docker images.  It would 
> be nice to have an application catalog system which provides an editorial and 
> search interface for YARN applications.  This improves the usability of YARN for 
> managing the life cycle of applications.






[jira] [Commented] (YARN-7129) Application Catalog for YARN applications

2019-03-04 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784000#comment-16784000
 ] 

Billie Rinaldi commented on YARN-7129:
--

Thanks for working on this, [~eyang]! I am +1 for patch 28, if the precommit 
build goes through.

> Application Catalog for YARN applications
> -
>
> Key: YARN-7129
> URL: https://issues.apache.org/jira/browse/YARN-7129
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: applications
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN Appstore.pdf, YARN-7129.001.patch, 
> YARN-7129.002.patch, YARN-7129.003.patch, YARN-7129.004.patch, 
> YARN-7129.005.patch, YARN-7129.006.patch, YARN-7129.007.patch, 
> YARN-7129.008.patch, YARN-7129.009.patch, YARN-7129.010.patch, 
> YARN-7129.011.patch, YARN-7129.012.patch, YARN-7129.013.patch, 
> YARN-7129.014.patch, YARN-7129.015.patch, YARN-7129.016.patch, 
> YARN-7129.017.patch, YARN-7129.018.patch, YARN-7129.019.patch, 
> YARN-7129.020.patch, YARN-7129.021.patch, YARN-7129.022.patch, 
> YARN-7129.023.patch, YARN-7129.024.patch, YARN-7129.025.patch, 
> YARN-7129.026.patch, YARN-7129.027.patch, YARN-7129.028.patch
>
>
> YARN native services provides a web services API to improve the usability of 
> application deployment on Hadoop using a collection of docker images.  It would 
> be nice to have an application catalog system which provides an editorial and 
> search interface for YARN applications.  This improves the usability of YARN for 
> managing the life cycle of applications.






[jira] [Created] (YARN-9339) Apps pending metric incorrect after moving app to a new queue

2019-02-28 Thread Billie Rinaldi (JIRA)
Billie Rinaldi created YARN-9339:


 Summary: Apps pending metric incorrect after moving app to a new 
queue
 Key: YARN-9339
 URL: https://issues.apache.org/jira/browse/YARN-9339
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Billie Rinaldi


I observed a cluster that had a high Apps Pending count that appeared to be 
incorrect. This seemed to be related to apps being moved to different queues. I 
tested by adding some logging to TestCapacityScheduler#testMoveAppBasic before 
and after a moveApplication call. Before the call appsPending was 1 and 
afterwards appsPending was 2.






[jira] [Updated] (YARN-9334) YARN Service Client does not work with SPNEGO when knox is configured

2019-02-28 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9334:
-
Fix Version/s: 3.1.3
   3.2.1

> YARN Service Client does not work with SPNEGO when knox is configured
> -
>
> Key: YARN-9334
> URL: https://issues.apache.org/jira/browse/YARN-9334
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.1.0, 3.2.0
>Reporter: Tarun Parimi
>Assignee: Billie Rinaldi
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: YARN-9334.01.patch
>
>
> When knox is configured, the configuration hadoop.http.authentication.type is 
> set to 
> org.apache.hadoop.security.authentication.server.JWTRedirectAuthenticationHandler
>  instead of kerberos.
> We have the following check in ApiServiceClient#getApiClient for kerberos.
> {code:java}
> if (conf.get("hadoop.http.authentication.type").equals("kerberos")) {
>   try {
> URI url = new URI(requestPath);
> String challenge = YarnClientUtils.generateToken(url.getHost());
> builder.header(HttpHeaders.AUTHORIZATION, "Negotiate " + challenge);
>   } catch (Exception e) {
> throw new IOException(e);
>   }
> }
> {code}
> So we always get a 401 error since there is no auth handling for knox.
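
The failure mode in the description above can be reduced to a small sketch (helper names here are hypothetical, not the actual ApiServiceClient code): SPNEGO is attempted only when the configured type is literally the string "kerberos", so the handler class name that knox installs never matches and the request goes out without a Negotiate header.

```java
public class AuthTypeCheckSketch {
    // Hypothetical reduction of the equality check quoted above: SPNEGO is
    // negotiated only when the configured auth type is exactly "kerberos".
    static boolean negotiatesSpnego(String authType) {
        return "kerberos".equals(authType);
    }

    public static void main(String[] args) {
        System.out.println(negotiatesSpnego("kerberos"));   // true
        // Knox sets the handler class name instead of "kerberos", so the
        // check fails and the request carries no Authorization header -> 401.
        System.out.println(negotiatesSpnego(
            "org.apache.hadoop.security.authentication.server."
                + "JWTRedirectAuthenticationHandler"));     // false
    }
}
```
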






[jira] [Commented] (YARN-9334) YARN Service Client does not work with SPNEGO when knox is configured

2019-02-27 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780053#comment-16780053
 ] 

Billie Rinaldi commented on YARN-9334:
--

Thanks for the review, [~eyang]! Any objections for me to backport this to 3.2 
and 3.1 as well?

> YARN Service Client does not work with SPNEGO when knox is configured
> -
>
> Key: YARN-9334
> URL: https://issues.apache.org/jira/browse/YARN-9334
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.1.0, 3.2.0
>Reporter: Tarun Parimi
>Assignee: Billie Rinaldi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9334.01.patch
>
>
> When knox is configured, the configuration hadoop.http.authentication.type is 
> set to 
> org.apache.hadoop.security.authentication.server.JWTRedirectAuthenticationHandler
>  instead of kerberos.
> We have the following check in ApiServiceClient#getApiClient for kerberos.
> {code:java}
> if (conf.get("hadoop.http.authentication.type").equals("kerberos")) {
>   try {
> URI url = new URI(requestPath);
> String challenge = YarnClientUtils.generateToken(url.getHost());
> builder.header(HttpHeaders.AUTHORIZATION, "Negotiate " + challenge);
>   } catch (Exception e) {
> throw new IOException(e);
>   }
> }
> {code}
> So we always get a 401 error since there is no auth handling for knox.






[jira] [Updated] (YARN-9334) YARN Service Client does not work with SPNEGO when knox is configured

2019-02-27 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9334:
-
Attachment: YARN-9334.01.patch

> YARN Service Client does not work with SPNEGO when knox is configured
> -
>
> Key: YARN-9334
> URL: https://issues.apache.org/jira/browse/YARN-9334
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.1.0, 3.2.0
>Reporter: Tarun Parimi
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9334.01.patch
>
>
> When knox is configured, the configuration hadoop.http.authentication.type is 
> set to 
> org.apache.hadoop.security.authentication.server.JWTRedirectAuthenticationHandler
>  instead of kerberos.
> We have the following check in ApiServiceClient#getApiClient for kerberos.
> {code:java}
> if (conf.get("hadoop.http.authentication.type").equals("kerberos")) {
>   try {
> URI url = new URI(requestPath);
> String challenge = YarnClientUtils.generateToken(url.getHost());
> builder.header(HttpHeaders.AUTHORIZATION, "Negotiate " + challenge);
>   } catch (Exception e) {
> throw new IOException(e);
>   }
> }
> {code}
> So we always get a 401 error since there is no auth handling for knox.






[jira] [Assigned] (YARN-9334) YARN Service Client does not work with SPNEGO when knox is configured

2019-02-27 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi reassigned YARN-9334:


Assignee: Billie Rinaldi

> YARN Service Client does not work with SPNEGO when knox is configured
> -
>
> Key: YARN-9334
> URL: https://issues.apache.org/jira/browse/YARN-9334
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.1.0, 3.2.0
>Reporter: Tarun Parimi
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9334.01.patch
>
>
> When knox is configured, the configuration hadoop.http.authentication.type is 
> set to 
> org.apache.hadoop.security.authentication.server.JWTRedirectAuthenticationHandler
>  instead of kerberos.
> We have the following check in ApiServiceClient#getApiClient for kerberos.
> {code:java}
> if (conf.get("hadoop.http.authentication.type").equals("kerberos")) {
>   try {
> URI url = new URI(requestPath);
> String challenge = YarnClientUtils.generateToken(url.getHost());
> builder.header(HttpHeaders.AUTHORIZATION, "Negotiate " + challenge);
>   } catch (Exception e) {
> throw new IOException(e);
>   }
> }
> {code}
> So we always get a 401 error since there is no auth handling for knox.






[jira] [Updated] (YARN-8761) Service AM support for decommissioning component instances

2019-02-08 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-8761:
-
Hadoop Flags:   (was: Incompatible change)

> Service AM support for decommissioning component instances
> --
>
> Key: YARN-8761
> URL: https://issues.apache.org/jira/browse/YARN-8761
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-8761.01.patch, YARN-8761.02.patch, 
> YARN-8761.03.patch, YARN-8761.04.patch, YARN-8761.05.patch
>
>
> The idea behind this feature is to have a flex down where specific component 
> instances are removed. Currently on a flex down, the service AM chooses for 
> removal the component instances with the highest IDs.






[jira] [Updated] (YARN-8761) Service AM support for decommissioning component instances

2019-02-08 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-8761:
-
Attachment: YARN-8761-branch-3.1.01.patch

> Service AM support for decommissioning component instances
> --
>
> Key: YARN-8761
> URL: https://issues.apache.org/jira/browse/YARN-8761
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-8761-branch-3.1.01.patch, YARN-8761.01.patch, 
> YARN-8761.02.patch, YARN-8761.03.patch, YARN-8761.04.patch, YARN-8761.05.patch
>
>
> The idea behind this feature is to have a flex down where specific component 
> instances are removed. Currently on a flex down, the service AM chooses for 
> removal the component instances with the highest IDs.






[jira] [Updated] (YARN-8761) Service AM support for decommissioning component instances

2019-02-08 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-8761:
-
Target Version/s: 3.3.0, 3.2.1, 3.1.3  (was: 3.3.0)

> Service AM support for decommissioning component instances
> --
>
> Key: YARN-8761
> URL: https://issues.apache.org/jira/browse/YARN-8761
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-8761.01.patch, YARN-8761.02.patch, 
> YARN-8761.03.patch, YARN-8761.04.patch, YARN-8761.05.patch
>
>
> The idea behind this feature is to have a flex down where specific component 
> instances are removed. Currently on a flex down, the service AM chooses for 
> removal the component instances with the highest IDs.






[jira] [Reopened] (YARN-8761) Service AM support for decommissioning component instances

2019-02-08 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi reopened YARN-8761:
--

> Service AM support for decommissioning component instances
> --
>
> Key: YARN-8761
> URL: https://issues.apache.org/jira/browse/YARN-8761
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-8761.01.patch, YARN-8761.02.patch, 
> YARN-8761.03.patch, YARN-8761.04.patch, YARN-8761.05.patch
>
>
> The idea behind this feature is to have a flex down where specific component 
> instances are removed. Currently on a flex down, the service AM chooses for 
> removal the component instances with the highest IDs.






[jira] [Commented] (YARN-9184) Docker run doesn't pull down latest image if the image exists locally

2019-02-07 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763244#comment-16763244
 ] 

Billie Rinaldi commented on YARN-9184:
--

Pulling an image by digest looks relevant: 
https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier

> Docker run doesn't pull down latest image if the image exists locally 
> --
>
> Key: YARN-9184
> URL: https://issues.apache.org/jira/browse/YARN-9184
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Zhaohui Xin
>Assignee: Zhaohui Xin
>Priority: Major
> Attachments: YARN-9184.001.patch, YARN-9184.002.patch, 
> YARN-9184.003.patch, YARN-9184.004.patch
>
>
> See [docker run doesn't pull down latest image if the image exists 
> locally|https://github.com/moby/moby/issues/13331].
> So, I think we should pull the image before running it, so the image is always the latest.






[jira] [Commented] (YARN-8761) Service AM support for decommissioning component instances

2019-02-05 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16761343#comment-16761343
 ] 

Billie Rinaldi commented on YARN-8761:
--

[~eyang] [~leftnoteasy] [~sunilg] I'm interested in backporting this patch to 
branch-3.2 and branch-3.1. This patch will be useful for platforms such as 
OpenWhisk that have the ability to remove specific containers when scaling down.

I'm not sure I agree that this is an incompatible change. For apps that require 
the component instance numbers to be linearly increasing, users can avoid using 
the decommission capability and instead can use the original flex capability 
for removing containers.
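The contrast between the two removal policies can be sketched as follows (instance naming and the collections used are illustrative assumptions, not the service AM's real data structures):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

/**
 * Illustrative sketch of the two flex-down policies discussed above;
 * not the actual service AM code.
 */
public class FlexDownSketch {

  // Original flex down: shrink to `target` instances by keeping the
  // lowest-numbered ones, i.e. the highest IDs are removed.
  static List<String> flexDown(List<String> instances, int target) {
    return instances.stream()
        .sorted(Comparator.comparingInt(FlexDownSketch::id))
        .limit(target)
        .collect(Collectors.toList());
  }

  // Decommission: remove exactly the named instances, regardless of ID,
  // which is what platforms that scale down specific containers need.
  static List<String> decommission(List<String> instances, Set<String> toRemove) {
    return instances.stream()
        .filter(i -> !toRemove.contains(i))
        .collect(Collectors.toList());
  }

  // Instance names look like "worker-3"; the numeric suffix is the ID.
  private static int id(String instance) {
    return Integer.parseInt(instance.substring(instance.lastIndexOf('-') + 1));
  }
}
```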

> Service AM support for decommissioning component instances
> --
>
> Key: YARN-8761
> URL: https://issues.apache.org/jira/browse/YARN-8761
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-8761.01.patch, YARN-8761.02.patch, 
> YARN-8761.03.patch, YARN-8761.04.patch, YARN-8761.05.patch
>
>
> The idea behind this feature is to have a flex down where specific component 
> instances are removed. Currently on a flex down, the service AM chooses for 
> removal the component instances with the highest IDs.






[jira] [Commented] (YARN-9190) [Submarine] Submarine job will fail to run as a first job on a new created Hadoop 3.2.0 RC1 cluster

2019-02-01 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758365#comment-16758365
 ] 

Billie Rinaldi commented on YARN-9190:
--

[~tangzhankun] Since I haven't seen this locally, I'll suggest some debugging 
tips:
* Check the client output for ApiServiceClient vs. ServiceClient logs. You 
should see ApiServiceClient, which is the one that goes through the RM REST API.
* Check the RM log for a POST: createService log by ApiServer. After that there 
should be ServiceClient logs describing what it's doing: using an existing 
tarball, uploading a new tarball for reuse (only allowed for hdfs or yarn 
admin), or uploading a temporary tarball just for this app.
* Find the tarball the AM is using (check the AM log and/or look in the 
container directory) and check which jars it contains. Compare the jar list 
with the contents of a good tarball, such as the one uploaded with yarn app 
-enableFastLaunch.

> [Submarine] Submarine job will fail to run as a first job on a new created 
> Hadoop 3.2.0 RC1 cluster
> ---
>
> Key: YARN-9190
> URL: https://issues.apache.org/jira/browse/YARN-9190
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Sunil Govindan
>Priority: Major
>
> This issue was found when verifying submarine in Hadoop 3.2.0 RC1 planning. 
> The reproduce steps are:
>  # Init a new HDFS and YARN (LinuxContainerExecutor and Docker enabled)
>  # Before running any other YARN service job, use the yarn user to submit a 
> submarine job
> The job will fail with below error:
>  
> {code:java}
> LogType:serviceam-err.txt
> LogLastModifiedTime:Thu Jan 10 21:15:23 +0800 2019
> LogLength:86
> LogContents:
> Error: Could not find or load main class 
> org.apache.hadoop.yarn.service.ServiceMaster
> End of LogType:serviceam-err.txt
> {code}
> This seems to be because the dependencies are not ready, as the service client 
> reported:
> {code:java}
> 2019-01-10 21:50:47,380 WARN client.ServiceClient: Property 
> yarn.service.framework.path has a value 
> /yarn-services/3.2.0/service-dep.tar.gz, but is not a valid file
> 2019-01-10 21:50:47,381 INFO client.ServiceClient: Uploading all dependency 
> jars to HDFS. For faster submission of apps, set config property 
> yarn.service.framework.path to the dependency tarball location. Dependency 
> tarball can be uploaded to any HDFS path directly or by using command: yarn 
> app -enableFastLaunch []{code}
>  
> When this error happens, I found that there is no “/yarn-services” directory 
> created in HDFS.
> But after I run “yarn app -launch my-sleeper sleeper”, the “/yarn-services” 
> created in HDFS and then the submarine job can run successfully.
> {code:java}
> yarn@master0-VirtualBox:~/apache-hadoop-install-dir/hadoop-dev-workspace$ 
> hdfs dfs -ls /yarn-services/3.2.0/*
> -rwxr-xr-x 1 yarn supergroup 93596476 2019-01-11 08:23 
> /yarn-services/3.2.0/service-dep.tar.gz{code}
> It seems to be an issue of YARN service in 3.2.0 RC1, and I filed this Jira to 
> track it.
>  
> And I verified that the trunk branch doesn't have this issue.






[jira] [Commented] (YARN-7129) Application Catalog for YARN applications

2019-01-30 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756339#comment-16756339
 ] 

Billie Rinaldi commented on YARN-7129:
--

Thanks for working on this, [~eyang]. The catalog looks like a promising 
feature. I have started reviewing patch 18. Here is a first round of comments:
 - Service catalog might be a better name, since the catalog handles YARN 
services and not arbitrary applications. In that case I’d suggest moving the 
module to 
hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-catalog.
 - NOTICE says “binary distribution of this product bundles binaries of ___” 
but it should be “source distribution bundles ___.” Also, this information 
should go in LICENSE instead of NOTICE for non-ASLv2 permissive licenses (see 
[http://www.apache.org/dev/licensing-howto.html#permissive-deps]). It would be 
helpful to include the path(s) to the dependencies, so it’s easier to tell when 
included source code has proper handling in LICENSE and NOTICE and when it 
doesn’t. I have not checked whether all the external code added in this patch 
has been included properly in LICENSE and NOTICE yet.
 - There are files named bootstrap-hwx.js and bootstrap-hwx.min.js.
 - 8080 is a difficult port. If you ran the catalog on an Ambari-installed 
cluster, it would fail if it came up on the Ambari server host. Does the app 
catalog need to have net=host? If not, the port wouldn’t be a problem.
 - I don’t think we should set net=host for the examples, because that setting 
is not recommended for most containers. Also, for the services using 8080, they 
could run into conflicts when net=host.
 - I think it would be better to have fewer examples in samples.xml and make 
sure that they all work (the pcjs image doesn’t exist on dockerhub, and we 
might not want to seem to be “endorsing” all the example images that are 
currently included). Maybe just include httpd and Jenkins? Registry would also 
be a reasonable candidate, but it didn’t work when I tried it with insecure 
limit users mode (it failed to make a dir when pushing an image to it; maybe it 
would work as a privileged container with the image name library/registry:latest).
 - entrypoint.sh needs modifications to make insecure mode possible (maybe 
checking for empty KEYTAB variable would be sufficient, since -e returns true 
for an empty variable).
 - Need better defaults for memory requests. When I tested, the catalog and 
Jenkins containers were killed due to OOM. 2048 for the catalog and 1024 for 
Jenkins worked for me.
 - I had to set JAVA_HOME in the catalog service json because the container and 
host didn’t have the same JAVA_HOME.
 - It would be good to include the service json needed to launch the catalog. 
We’d need to make the catalog Docker image version a variable for maven to fill 
in during the build process. Maybe the catalog could be one of the service 
examples, like sleeper and httpd.
 - Downloading from archive.apache.org is discouraged. Is there anything else 
we can do instead for the solr download?
 - Applications launched by the catalog are run as the root user (presumably 
because the catalog is a privileged container); at least that’s what is 
happening in insecure mode. The catalog should have user auth and launch the 
apps as the end user. I see there are already tickets open to address this 
issue.
 - We need to work out persistent storage for some of these services (including 
the catalog), or users will get a bad surprise when services or individual 
containers are restarted.
 - Restart doesn’t seem to work. Looks like it is issuing a start, so maybe 
should be named start instead. Restart implies stopping + starting.
 - Could we use AppAdminClient/ApiServiceClient instead of copying the 
getApiClient methods from ApiServiceClient to make the REST calls?
 - After deploying a service, it drops to the bottom of the list on the main 
catalog page. Is that intentional?
 - If you click on a UI link too early, it brings up a URL such as 
{{jenkins.${service_name}.${user}.${domain}:8080}}. It would be better to 
disallow clicking on the link until the variables are populated, if possible.

> Application Catalog for YARN applications
> -
>
> Key: YARN-7129
> URL: https://issues.apache.org/jira/browse/YARN-7129
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: applications
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN Appstore.pdf, YARN-7129.001.patch, 
> YARN-7129.002.patch, YARN-7129.003.patch, YARN-7129.004.patch, 
> YARN-7129.005.patch, YARN-7129.006.patch, YARN-7129.007.patch, 
> YARN-7129.008.patch, YARN-7129.009.patch, YARN-7129.010.patch, 
> YARN-7129.011.patch, YARN-7129.012.patch, YARN-7129.013.patch, 
> YARN-7129.014.patch, YARN-7129.015.patch, 

[jira] [Commented] (YARN-9221) Add a flag to enable dynamic auxiliary service feature

2019-01-25 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16752639#comment-16752639
 ] 

Billie Rinaldi commented on YARN-9221:
--

Thanks [~eyang]! Patch 3 fixes the javadoc issue and uses 400 Bad Request when 
the feature is disabled.

> Add a flag to enable dynamic auxiliary service feature
> --
>
> Key: YARN-9221
> URL: https://issues.apache.org/jira/browse/YARN-9221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9221.01.patch, YARN-9221.02.patch, 
> YARN-9221.03.patch
>
>
> The dynamic auxiliary service feature enables reconfiguring YARN auxiliary 
> services on demand.  This feature is optional, and it would be nice for it to 
> be disabled by default.






[jira] [Updated] (YARN-9221) Add a flag to enable dynamic auxiliary service feature

2019-01-25 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9221:
-
Attachment: YARN-9221.03.patch

> Add a flag to enable dynamic auxiliary service feature
> --
>
> Key: YARN-9221
> URL: https://issues.apache.org/jira/browse/YARN-9221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9221.01.patch, YARN-9221.02.patch, 
> YARN-9221.03.patch
>
>
> The dynamic auxiliary service feature enables reconfiguring YARN auxiliary 
> services on demand.  This feature is optional, and it would be nice for it to 
> be disabled by default.






[jira] [Commented] (YARN-9089) Add Terminal Link to Service component instance page for UI2

2019-01-25 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16752627#comment-16752627
 ] 

Billie Rinaldi commented on YARN-9089:
--

[~sunilg] No, all the subtasks under YARN-8762 would need to be included.

> Add Terminal Link to Service component instance page for UI2
> 
>
> Key: YARN-9089
> URL: https://issues.apache.org/jira/browse/YARN-9089
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9089.001.patch, YARN-9089.002.patch, 
> YARN-9089.003.patch, YARN-9089.004.patch
>
>
> In UI2, Service > Component > Component Instance uses Timeline server to 
> aggregate information about Service component instance.  Timeline server does 
> not have the full information like the port number of the node manager, or 
> the web protocol used by the node manager.  It requires some changes to 
> aggregate node manager information into Timeline server in order to compute 
> the Terminal link.  To reduce the scope of YARN-8914, it is better to file 
> this as a separate task.






[jira] [Commented] (YARN-9221) Add a flag to enable dynamic auxiliary service feature

2019-01-24 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751591#comment-16751591
 ] 

Billie Rinaldi commented on YARN-9221:
--

Thanks [~eyang]! I improved the documentation in patch 2, as well as hopefully 
addressing the build errors.

> Add a flag to enable dynamic auxiliary service feature
> --
>
> Key: YARN-9221
> URL: https://issues.apache.org/jira/browse/YARN-9221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9221.01.patch, YARN-9221.02.patch
>
>
> The dynamic auxiliary service feature enables reconfiguring YARN auxiliary 
> services on demand.  This feature is optional, and it would be nice for it to 
> be disabled by default.






[jira] [Updated] (YARN-9221) Add a flag to enable dynamic auxiliary service feature

2019-01-24 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9221:
-
Attachment: YARN-9221.02.patch

> Add a flag to enable dynamic auxiliary service feature
> --
>
> Key: YARN-9221
> URL: https://issues.apache.org/jira/browse/YARN-9221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9221.01.patch, YARN-9221.02.patch
>
>
> The dynamic auxiliary service feature enables reconfiguring YARN auxiliary 
> services on demand.  This feature is optional, and it would be nice for it to 
> be disabled by default.






[jira] [Updated] (YARN-9221) Add a flag to enable dynamic auxiliary service feature

2019-01-23 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9221:
-
Attachment: YARN-9221.01.patch

> Add a flag to enable dynamic auxiliary service feature
> --
>
> Key: YARN-9221
> URL: https://issues.apache.org/jira/browse/YARN-9221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9221.01.patch
>
>
> The dynamic auxiliary service feature enables reconfiguring YARN auxiliary 
> services on demand.  This feature is optional, and it would be nice for it to 
> be disabled by default.






[jira] [Commented] (YARN-9146) REST API to trigger storing auxiliary manifest file and publish to NMs

2019-01-22 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16749001#comment-16749001
 ] 

Billie Rinaldi commented on YARN-9146:
--

Fixed site formatting issue in patch 3.

> REST API to trigger storing auxiliary manifest file and publish to NMs
> --
>
> Key: YARN-9146
> URL: https://issues.apache.org/jira/browse/YARN-9146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9146.01.patch, YARN-9146.02.patch, 
> YARN-9146.03.patch
>
>
> A system administrator can change auxiliary services via configuration and 
> restart node managers for the change to take effect.  The new auxiliary 
> service design allows the service manifest file to be stored in HDFS or the 
> local file system.  This task is to create a set of REST APIs to update the 
> auxiliary service manifest file, and a communication protocol to notify NMs to 
> reload auxiliary services.






[jira] [Updated] (YARN-9146) REST API to trigger storing auxiliary manifest file and publish to NMs

2019-01-22 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9146:
-
Attachment: YARN-9146.03.patch

> REST API to trigger storing auxiliary manifest file and publish to NMs
> --
>
> Key: YARN-9146
> URL: https://issues.apache.org/jira/browse/YARN-9146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9146.01.patch, YARN-9146.02.patch, 
> YARN-9146.03.patch
>
>
> A system administrator can change auxiliary services via configuration and 
> restart node managers for the change to take effect.  The new auxiliary 
> service design allows the service manifest file to be stored in HDFS or the 
> local file system.  This task is to create a set of REST APIs to update the 
> auxiliary service manifest file, and a communication protocol to notify NMs to 
> reload auxiliary services.






[jira] [Commented] (YARN-9197) NPE in service AM when failed to launch container

2019-01-18 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16746582#comment-16746582
 ] 

Billie Rinaldi commented on YARN-9197:
--

Thanks for the patch, [~kyungwan nam]!

> NPE in service AM when failed to launch container
> -
>
> Key: YARN-9197
> URL: https://issues.apache.org/jira/browse/YARN-9197
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: kyungwan nam
>Assignee: kyungwan nam
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9197.001.patch
>
>
> I’ve met NPE in service AM as follows.
> {code}
> 2019-01-02 22:35:47,582 [Component  dispatcher] INFO  component.Component - 
> [COMPONENT regionserver]: Assigned container_e15_1542704944343_0001_01_01 
> to component instance regionserver-1 and launch on host test2.com:45454 
> 2019-01-02 22:35:47,588 [pool-6-thread-5] WARN  ipc.Client - Exception 
> encountered while connecting to the server : 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
>  token (token for yarn-ats: HDFS_DELEGATION_TOKEN owner=yarn-ats, 
> renewer=yarn, realUser=rm/test1.nfra...@example.com, issueDate=1542704946397, 
> maxDate=1543309746397, sequenceNumber=97, masterKeyId=90) can't be found in 
> cache
> 2019-01-02 22:35:47,592 [pool-6-thread-5] ERROR 
> containerlaunch.ContainerLaunchService - [COMPINSTANCE regionserver-1 : 
> container_e15_1542704944343_0001_01_01]: Failed to launch container.
> java.io.IOException: Package doesn't exist as a resource: 
> /hdp/apps/3.0.0.0-1634/hbase/hbase.tar.gz
>   at 
> org.apache.hadoop.yarn.service.provider.tarball.TarballProviderService.processArtifact(TarballProviderService.java:41)
>   at 
> org.apache.hadoop.yarn.service.provider.AbstractProviderService.buildContainerLaunchContext(AbstractProviderService.java:144)
>   at 
> org.apache.hadoop.yarn.service.containerlaunch.ContainerLaunchService$ContainerLauncher.run(ContainerLaunchService.java:107)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> 2019-01-02 22:35:47,592 [Component  dispatcher] INFO  component.Component - 
> [COMPONENT regionserver] Requesting for 1 container(s)
> 2019-01-02 22:35:47,592 [Component  dispatcher] INFO  component.Component - 
> [COMPONENT regionserver] Submitting scheduling request: 
> SchedulingRequestPBImpl{priority=1, allocationReqId=1, 
> executionType={Execution Type: GUARANTEED, Enforce Execution Type: true}, 
> allocationTags=[regionserver], 
> resourceSizing=ResourceSizingPBImpl{numAllocations=1, resources= vCores:1>}, placementConstraint=notin,node,regionserver}
> 2019-01-02 22:35:47,593 [Component  dispatcher] INFO  
> instance.ComponentInstance - [COMPINSTANCE regionserver-1 : 
> container_e15_1542704944343_0001_01_01]: 
> container_e15_1542704944343_0001_01_01 completed. Reinsert back to 
> pending list and requested a new container.
>  exitStatus=null, diagnostics=failed before launch
> 2019-01-02 22:35:47,593 [Component  dispatcher] INFO  
> instance.ComponentInstance - Publishing component instance status 
> container_e15_1542704944343_0001_01_01 FAILED 
> 2019-01-02 22:35:47,593 [Component  dispatcher] ERROR 
> service.ServiceScheduler - [COMPINSTANCE regionserver-1 : 
> container_e15_1542704944343_0001_01_01]: Error in handling event type STOP
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.service.component.instance.ComponentInstance.handleComponentInstanceRelaunch(ComponentInstance.java:342)
>   at 
> org.apache.hadoop.yarn.service.component.instance.ComponentInstance$ContainerStoppedTransition.transition(ComponentInstance.java:482)
>   at 
> org.apache.hadoop.yarn.service.component.instance.ComponentInstance$ContainerStoppedTransition.transition(ComponentInstance.java:375)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
>   at 
> org.apache.hadoop.yarn.service.component.instance.ComponentInstance.handle(ComponentInstance.java:679)
>   at 
> org.apache.hadoop.yarn.service.ServiceScheduler$ComponentInstanceEventHandler.handle(ServiceScheduler.java:654)
>   at 
> 

[jira] [Commented] (YARN-9146) REST API to trigger storing auxiliary manifest file and publish to NMs

2019-01-18 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16746580#comment-16746580
 ] 

Billie Rinaldi commented on YARN-9146:
--

Thanks for the review, [~eyang]. I've attached patch 2 that reports a bad 
request when no data is sent.

> REST API to trigger storing auxiliary manifest file and publish to NMs
> --
>
> Key: YARN-9146
> URL: https://issues.apache.org/jira/browse/YARN-9146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9146.01.patch, YARN-9146.02.patch
>
>
> A system administrator can change auxiliary services via configuration and 
> restart node managers for the change to take effect.  The new auxiliary 
> service design allows the service manifest file to be stored in HDFS or the 
> local file system.  This task is to create a set of REST APIs to update the 
> auxiliary service manifest file, and a communication protocol to notify NMs to 
> reload auxiliary services.






[jira] [Updated] (YARN-9146) REST API to trigger storing auxiliary manifest file and publish to NMs

2019-01-18 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9146:
-
Attachment: YARN-9146.02.patch

> REST API to trigger storing auxiliary manifest file and publish to NMs
> --
>
> Key: YARN-9146
> URL: https://issues.apache.org/jira/browse/YARN-9146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9146.01.patch, YARN-9146.02.patch
>
>
> System administrator can change auxiliary service by configuration and 
> restart node managers for the change to take effect.  The new auxiliary 
> service design allows service manifest file to be stored in hdfs or local 
> file system.  This task is to create a set of REST API to update auxiliary 
> service manifest file, and communication protocol to notify NMs to reload 
> auxiliary services.






[jira] [Commented] (YARN-9190) [Submarine] Submarine job will fail to run as a first job on a new created Hadoop 3.2.0 RC1 cluster

2019-01-15 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16743262#comment-16743262
 ] 

Billie Rinaldi commented on YARN-9190:
--

The problem was fixed in YARN-9001, which changed Submarine to use AppAdminClient 
instead of ServiceClient directly. AppAdminClient uses the RM REST API to launch 
the app (so service.libdir is already defined).

> [Submarine] Submarine job will fail to run as a first job on a new created 
> Hadoop 3.2.0 RC1 cluster
> ---
>
> Key: YARN-9190
> URL: https://issues.apache.org/jira/browse/YARN-9190
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Sunil Govindan
>Priority: Minor
>
> This issue was found when verifying submarine in Hadoop 3.2.0 RC1 planning. 
> The reproduce steps are:
>  # Init a new HDFS and YARN (LinuxContainerExecutor and Docker enabled)
>  # Before run any other yarn service job, use yarn user to submit a submarine 
> job
> The job will fail with below error:
>  
> {code:java}
> LogType:serviceam-err.txt
> LogLastModifiedTime:Thu Jan 10 21:15:23 +0800 2019
> LogLength:86
> LogContents:
> Error: Could not find or load main class 
> org.apache.hadoop.yarn.service.ServiceMaster
> End of LogType:serviceam-err.txt
> {code}
> This seems because the dependencies are not ready as the service client 
> reported:
> {code:java}
> 2019-01-10 21:50:47,380 WARN client.ServiceClient: Property 
> yarn.service.framework.path has a value 
> /yarn-services/3.2.0/service-dep.tar.gz, but is not a valid file
> 2019-01-10 21:50:47,381 INFO client.ServiceClient: Uploading all dependency 
> jars to HDFS. For faster submission of apps, set config property 
> yarn.service.framework.path to the dependency tarball location. Dependency 
> tarball can be uploaded to any HDFS path directly or by using command: yarn 
> app -enableFastLaunch []{code}
>  
> When this error happens, I found that there is no “/yarn-services” directory 
> created in HDFS.
> But after I run “yarn app -launch my-sleeper sleeper”, the “/yarn-services” 
> created in HDFS and then the submarine job can run successfully.
> {code:java}
> yarn@master0-VirtualBox:~/apache-hadoop-install-dir/hadoop-dev-workspace$ 
> hdfs dfs -ls /yarn-services/3.2.0/*
> -rwxr-xr-x 1 yarn supergroup 93596476 2019-01-11 08:23 
> /yarn-services/3.2.0/service-dep.tar.gz{code}
> It seems an issue of yarn service in 3.2.0 RC1 and I files this Jira to track 
> it.
>  
> And verified that trunk branch doesn't have this issue.






[jira] [Commented] (YARN-9190) [Submarine] Submarine job will fail to run as a first job on a new created Hadoop 3.2.0 RC1 cluster

2019-01-15 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16743247#comment-16743247
 ] 

Billie Rinaldi commented on YARN-9190:
--

The workaround is to run yarn app -enableFastLaunch as a YARN or HDFS admin 
user before running the submarine jar. I have not yet determined why the 
behavior differs in trunk, since the service.libdir problem does not appear to 
be fixed there.

> [Submarine] Submarine job will fail to run as a first job on a new created 
> Hadoop 3.2.0 RC1 cluster
> ---
>
> Key: YARN-9190
> URL: https://issues.apache.org/jira/browse/YARN-9190
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Sunil Govindan
>Priority: Minor
>
> This issue was found when verifying submarine in Hadoop 3.2.0 RC1 planning. 
> The reproduce steps are:
>  # Init a new HDFS and YARN (LinuxContainerExecutor and Docker enabled)
>  # Before run any other yarn service job, use yarn user to submit a submarine 
> job
> The job will fail with below error:
>  
> {code:java}
> LogType:serviceam-err.txt
> LogLastModifiedTime:Thu Jan 10 21:15:23 +0800 2019
> LogLength:86
> LogContents:
> Error: Could not find or load main class 
> org.apache.hadoop.yarn.service.ServiceMaster
> End of LogType:serviceam-err.txt
> {code}
> This seems because the dependencies are not ready as the service client 
> reported:
> {code:java}
> 2019-01-10 21:50:47,380 WARN client.ServiceClient: Property 
> yarn.service.framework.path has a value 
> /yarn-services/3.2.0/service-dep.tar.gz, but is not a valid file
> 2019-01-10 21:50:47,381 INFO client.ServiceClient: Uploading all dependency 
> jars to HDFS. For faster submission of apps, set config property 
> yarn.service.framework.path to the dependency tarball location. Dependency 
> tarball can be uploaded to any HDFS path directly or by using command: yarn 
> app -enableFastLaunch []{code}
>  
> When this error happens, I found that there is no “/yarn-services” directory 
> created in HDFS.
> But after I run “yarn app -launch my-sleeper sleeper”, the “/yarn-services” 
> created in HDFS and then the submarine job can run successfully.
> {code:java}
> yarn@master0-VirtualBox:~/apache-hadoop-install-dir/hadoop-dev-workspace$ 
> hdfs dfs -ls /yarn-services/3.2.0/*
> -rwxr-xr-x 1 yarn supergroup 93596476 2019-01-11 08:23 
> /yarn-services/3.2.0/service-dep.tar.gz{code}
> It seems an issue of yarn service in 3.2.0 RC1 and I files this Jira to track 
> it.
>  
> And verified that trunk branch doesn't have this issue.






[jira] [Commented] (YARN-9190) [Submarine] Submarine job will fail to run as a first job on a new created Hadoop 3.2.0 RC1 cluster

2019-01-13 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16741756#comment-16741756
 ] 

Billie Rinaldi commented on YARN-9190:
--

Hi [~tangzhankun], the yarn script only sets service.libdir for the 
app|application|applicationattempt|container commands (and for the 
resourcemanager daemon). It doesn't set the system property for the yarn jar 
command, which is why the submarine launch can't find the service AM classes.
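
The subcommand gating described above can be sketched in shell. This is an 
illustrative simplification of the dispatch in bin/yarn, not the actual script; 
the lib directory below is a placeholder (the real script derives it from the 
Hadoop install layout), and the real script also builds the full classpath and 
handles daemon commands.

```shell
# Simplified sketch of how bin/yarn decides whether to pass -Dservice.libdir.
set_service_libdir() {
  case "$1" in
    app|application|applicationattempt|container)
      # placeholder path; the real script computes this from the install layout
      echo "-Dservice.libdir=${HADOOP_HDFS_HOME:-/opt/hadoop}/share/hadoop/yarn/lib/service"
      ;;
    *)
      # e.g. "yarn jar" (used by submarine) falls through with no property set
      echo ""
      ;;
  esac
}
```

Because jar falls into the default branch, a submarine launch via yarn jar 
never sees the property.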

> [Submarine] Submarine job will fail to run as a first job on a new created 
> Hadoop 3.2.0 RC1 cluster
> ---
>
> Key: YARN-9190
> URL: https://issues.apache.org/jira/browse/YARN-9190
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Sunil Govindan
>Priority: Minor
>
> This issue was found when verifying submarine in Hadoop 3.2.0 RC1 planning. 
> The reproduce steps are:
>  # Init a new HDFS and YARN (LinuxContainerExecutor and Docker enabled)
>  # Before run any other yarn service job, use yarn user to submit a submarine 
> job
> The job will fail with below error:
>  
> {code:java}
> LogType:serviceam-err.txt
> LogLastModifiedTime:Thu Jan 10 21:15:23 +0800 2019
> LogLength:86
> LogContents:
> Error: Could not find or load main class 
> org.apache.hadoop.yarn.service.ServiceMaster
> End of LogType:serviceam-err.txt
> {code}
> This seems because the dependencies are not ready as the service client 
> reported:
> {code:java}
> 2019-01-10 21:50:47,380 WARN client.ServiceClient: Property 
> yarn.service.framework.path has a value 
> /yarn-services/3.2.0/service-dep.tar.gz, but is not a valid file
> 2019-01-10 21:50:47,381 INFO client.ServiceClient: Uploading all dependency 
> jars to HDFS. For faster submission of apps, set config property 
> yarn.service.framework.path to the dependency tarball location. Dependency 
> tarball can be uploaded to any HDFS path directly or by using command: yarn 
> app -enableFastLaunch []{code}
>  
> When this error happens, I found that there is no “/yarn-services” directory 
> created in HDFS.
> But after I run “yarn app -launch my-sleeper sleeper”, the “/yarn-services” 
> created in HDFS and then the submarine job can run successfully.
> {code:java}
> yarn@master0-VirtualBox:~/apache-hadoop-install-dir/hadoop-dev-workspace$ 
> hdfs dfs -ls /yarn-services/3.2.0/*
> -rwxr-xr-x 1 yarn supergroup 93596476 2019-01-11 08:23 
> /yarn-services/3.2.0/service-dep.tar.gz{code}
> It seems an issue of yarn service in 3.2.0 RC1 and I files this Jira to track 
> it.
>  
> And verified that trunk branch doesn't have this issue.






[jira] [Commented] (YARN-9190) [Submarine] Submarine job will fail to run as a first job on a new created Hadoop 3.2.0 RC1 cluster

2019-01-11 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740499#comment-16740499
 ] 

Billie Rinaldi commented on YARN-9190:
--

[~tangzhankun] The problem is that the submarine job launch doesn't know where 
the service AM dependency jars are located. The dependencies are specified 
using a service.libdir system property: 
https://github.com/apache/hadoop/blob/release-3.2.0-RC1/hadoop-yarn-project/hadoop-yarn/bin/yarn#L78-L85.
 When running the yarn app -launch command or launching an app through the REST 
API, the system property is already configured, but since submarine uses the 
Java API the system property is not set automatically. An alternative to 
setting the system property is to run yarn app -enableFastLaunch, which 
uploads the dependency tarball. The tarball is also uploaded if you launch a 
job as the yarn user, as you discovered.

To avoid needing to upload the dependency tarball in advance, we should either 
set the service.libdir property for the submarine job launch, or consider 
setting it for all yarn jar commands.
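
As an operational sketch, the one-time admin workaround looks like the 
following. The tarball path comes from the WARN log in the issue description; 
hdfs dfs -test and yarn app -enableFastLaunch are real commands, but the guard 
function is illustrative and not part of Hadoop.

```shell
# One-time admin workaround: upload the service AM dependency tarball if the
# configured yarn.service.framework.path does not exist yet.
framework_path="/yarn-services/3.2.0/service-dep.tar.gz"  # path from the WARN log

needs_fast_launch() {
  # "$@" is the filesystem command to probe with, e.g. "hdfs dfs";
  # succeed (exit 0) when the tarball is missing and the upload is needed
  ! "$@" -test -f "$framework_path" 2>/dev/null
}

# Guarded so the sketch is a no-op on machines without a Hadoop client
if command -v hdfs >/dev/null 2>&1; then
  if needs_fast_launch hdfs dfs; then
    yarn app -enableFastLaunch   # run once as a YARN or HDFS admin user
  fi
fi
```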

> [Submarine] Submarine job will fail to run as a first job on a new created 
> Hadoop 3.2.0 RC1 cluster
> ---
>
> Key: YARN-9190
> URL: https://issues.apache.org/jira/browse/YARN-9190
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Sunil Govindan
>Priority: Minor
>
> This issue was found when verifying submarine in Hadoop 3.2.0 RC1 planning. 
> The reproduce steps are:
>  # Init a new HDFS and YARN (LinuxContainerExecutor and Docker enabled)
>  # Before run any other yarn service job, use yarn user to submit a submarine 
> job
> The job will fail with below error:
>  
> {code:java}
> LogType:serviceam-err.txt
> LogLastModifiedTime:Thu Jan 10 21:15:23 +0800 2019
> LogLength:86
> LogContents:
> Error: Could not find or load main class 
> org.apache.hadoop.yarn.service.ServiceMaster
> End of LogType:serviceam-err.txt
> {code}
> This seems because the dependencies are not ready as the service client 
> reported:
> {code:java}
> 2019-01-10 21:50:47,380 WARN client.ServiceClient: Property 
> yarn.service.framework.path has a value 
> /yarn-services/3.2.0/service-dep.tar.gz, but is not a valid file
> 2019-01-10 21:50:47,381 INFO client.ServiceClient: Uploading all dependency 
> jars to HDFS. For faster submission of apps, set config property 
> yarn.service.framework.path to the dependency tarball location. Dependency 
> tarball can be uploaded to any HDFS path directly or by using command: yarn 
> app -enableFastLaunch []{code}
>  
> When this error happens, I found that there is no “/yarn-services” directory 
> created in HDFS.
> But after I run “yarn app -launch my-sleeper sleeper”, the “/yarn-services” 
> created in HDFS and then the submarine job can run successfully.
> {code:java}
> yarn@master0-VirtualBox:~/apache-hadoop-install-dir/hadoop-dev-workspace$ 
> hdfs dfs -ls /yarn-services/3.2.0/*
> -rwxr-xr-x 1 yarn supergroup 93596476 2019-01-11 08:23 
> /yarn-services/3.2.0/service-dep.tar.gz{code}
> It seems an issue of yarn service in 3.2.0 RC1 and I files this Jira to track 
> it.
>  
> And verified that trunk branch doesn't have this issue.






[jira] [Assigned] (YARN-8447) Support configurable IPC mode in docker runtime

2019-01-11 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi reassigned YARN-8447:


Assignee: (was: Billie Rinaldi)

> Support configurable IPC mode in docker runtime
> ---
>
> Key: YARN-8447
> URL: https://issues.apache.org/jira/browse/YARN-8447
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Billie Rinaldi
>Priority: Major
>
> For features that require shared memory segments (such as short circuit 
> reads), we should support configuring the IPC namespace in the docker runtime.






[jira] [Assigned] (YARN-9146) REST API to trigger storing auxiliary manifest file and publish to NMs

2019-01-07 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi reassigned YARN-9146:


Assignee: Billie Rinaldi

> REST API to trigger storing auxiliary manifest file and publish to NMs
> --
>
> Key: YARN-9146
> URL: https://issues.apache.org/jira/browse/YARN-9146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
>
> System administrator can change auxiliary service by configuration and 
> restart node managers for the change to take effect.  The new auxiliary 
> service design allows service manifest file to be stored in hdfs or local 
> file system.  This task is to create a set of REST API to update auxiliary 
> service manifest file, and communication protocol to notify NMs to reload 
> auxiliary services.






[jira] [Updated] (YARN-9146) REST API to trigger storing auxiliary manifest file and publish to NMs

2019-01-07 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9146:
-
Attachment: YARN-9146.01.patch

> REST API to trigger storing auxiliary manifest file and publish to NMs
> --
>
> Key: YARN-9146
> URL: https://issues.apache.org/jira/browse/YARN-9146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9146.01.patch
>
>
> System administrator can change auxiliary service by configuration and 
> restart node managers for the change to take effect.  The new auxiliary 
> service design allows service manifest file to be stored in hdfs or local 
> file system.  This task is to create a set of REST API to update auxiliary 
> service manifest file, and communication protocol to notify NMs to reload 
> auxiliary services.






[jira] [Updated] (YARN-9147) Auxiliary manifest file deleted from HDFS does not trigger service to be removed

2019-01-02 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9147:
-
Attachment: YARN-9147.1.patch

> Auxiliary manifest file deleted from HDFS does not trigger service to be 
> removed
> 
>
> Key: YARN-9147
> URL: https://issues.apache.org/jira/browse/YARN-9147
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9147.1.patch
>
>
> If system administrator removed auxiliary manifest file from HDFS, this does 
> not trigger services to be removed.  System administrator must write:
> {code}
> {
>   "services": [
>   ]
> }
> {code}
> To remove all auxiliary services.  It might be a better experience to remove 
> auxiliary services, if the file has been removed.






[jira] [Commented] (YARN-9147) Auxiliary manifest file deleted from HDFS does not trigger service to be removed

2019-01-02 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16732234#comment-16732234
 ] 

Billie Rinaldi commented on YARN-9147:
--

Good idea. Thanks, [~eyang].

> Auxiliary manifest file deleted from HDFS does not trigger service to be 
> removed
> 
>
> Key: YARN-9147
> URL: https://issues.apache.org/jira/browse/YARN-9147
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
>
> If system administrator removed auxiliary manifest file from HDFS, this does 
> not trigger services to be removed.  System administrator must write:
> {code}
> {
>   "services": [
>   ]
> }
> {code}
> To remove all auxiliary services.  It might be a better experience to remove 
> auxiliary services, if the file has been removed.






[jira] [Assigned] (YARN-9147) Auxiliary manifest file deleted from HDFS does not trigger service to be removed

2019-01-02 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi reassigned YARN-9147:


Assignee: Billie Rinaldi

> Auxiliary manifest file deleted from HDFS does not trigger service to be 
> removed
> 
>
> Key: YARN-9147
> URL: https://issues.apache.org/jira/browse/YARN-9147
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
>
> If system administrator removed auxiliary manifest file from HDFS, this does 
> not trigger services to be removed.  System administrator must write:
> {code}
> {
>   "services": [
>   ]
> }
> {code}
> To remove all auxiliary services.  It might be a better experience to remove 
> auxiliary services, if the file has been removed.






[jira] [Updated] (YARN-9132) Add file permission check for auxiliary services manifest file

2018-12-20 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9132:
-
Attachment: YARN-9132.3.patch

> Add file permission check for auxiliary services manifest file
> --
>
> Key: YARN-9132
> URL: https://issues.apache.org/jira/browse/YARN-9132
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9132.1.patch, YARN-9132.2.patch, YARN-9132.3.patch
>
>
> The manifest file in HDFS must be owned by YARN admin or YARN service user 
> only.  This check helps to prevent loading of malware into node manager JVM.






[jira] [Commented] (YARN-9132) Add file permission check for auxiliary services manifest file

2018-12-20 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726284#comment-16726284
 ] 

Billie Rinaldi commented on YARN-9132:
--

Patch 3 performs a recursive check for group and other write permissions.
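
A rough illustration of such a check in shell (an assumption for illustration, 
not the patch itself, which performs the equivalent walk against HDFS file 
statuses): walk from the manifest up to a trusted base directory and reject the 
path if any component is writable by group or other. The stop parameter is a 
convenience of this sketch.

```shell
# Illustrative sketch: reject a manifest path if it, or any ancestor up to
# the stop directory, grants write permission to group or other.
octal_has_write() {
  # octal permission digits with the write bit set: 2, 3, 6, 7
  case "$1" in 2|3|6|7) return 0 ;; *) return 1 ;; esac
}

path_writable_by_group_or_other() {
  p="$1"; stop="$2"
  while :; do
    mode=$(stat -c '%a' "$p") || return 2
    other=${mode#"${mode%?}"}     # last octal digit
    rest=${mode%?}
    group=${rest#"${rest%?}"}     # second-to-last octal digit
    if octal_has_write "$group" || octal_has_write "$other"; then
      return 0                    # writable by group or other -> reject
    fi
    [ "$p" = "$stop" ] && break
    [ "$p" = "/" ] && break
    p=$(dirname "$p")
  done
  return 1
}
```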

> Add file permission check for auxiliary services manifest file
> --
>
> Key: YARN-9132
> URL: https://issues.apache.org/jira/browse/YARN-9132
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9132.1.patch, YARN-9132.2.patch, YARN-9132.3.patch
>
>
> The manifest file in HDFS must be owned by YARN admin or YARN service user 
> only.  This check helps to prevent loading of malware into node manager JVM.






[jira] [Commented] (YARN-9131) Document usage of Dynamic auxiliary services

2018-12-20 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726277#comment-16726277
 ] 

Billie Rinaldi commented on YARN-9131:
--

Patch 5 is documentation only. I moved the formatting fixes to YARN-9152.

> Document usage of Dynamic auxiliary services
> 
>
> Key: YARN-9131
> URL: https://issues.apache.org/jira/browse/YARN-9131
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9131.1.patch, YARN-9131.2.patch, YARN-9131.3.patch, 
> YARN-9131.4.patch, YARN-9131.5.patch
>
>
> This is a follow up issue to document YARN-9075 for admin to control which 
> aux service to add or remove.






[jira] [Updated] (YARN-9131) Document usage of Dynamic auxiliary services

2018-12-20 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9131:
-
Attachment: YARN-9131.5.patch

> Document usage of Dynamic auxiliary services
> 
>
> Key: YARN-9131
> URL: https://issues.apache.org/jira/browse/YARN-9131
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9131.1.patch, YARN-9131.2.patch, YARN-9131.3.patch, 
> YARN-9131.4.patch, YARN-9131.5.patch
>
>
> This is a follow up issue to document YARN-9075 for admin to control which 
> aux service to add or remove.






[jira] [Commented] (YARN-9152) Auxiliary service REST API query does not return running services

2018-12-20 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726275#comment-16726275
 ] 

Billie Rinaldi commented on YARN-9152:
--

It looks like the services were empty because of an admin user check. The aux 
service name, version, and start time can safely be viewed by non-admin users, 
so I have attached a patch that removes that check. I also noticed a JSON 
serialization issue with the auxiliaryservices endpoint, so I fixed that as 
well.

> Auxiliary service REST API query does not return running services
> -
>
> Key: YARN-9152
> URL: https://issues.apache.org/jira/browse/YARN-9152
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9152.1.patch
>
>
> Auxiliary service is configured with:
> {code}
> {
>   "services": [
> {
>   "name": "mapreduce_shuffle",
>   "version": "2",
>   "configuration": {
> "properties": {
>   "class.name": "org.apache.hadoop.mapred.ShuffleHandler",
>   "mapreduce.shuffle.transfer.buffer.size": "102400",
>   "mapreduce.shuffle.port": "13563"
> }
>   }
> }
>   ]
> }
> {code}
> Node manager log shows the service is registered:
> {code}
> 2018-12-19 22:38:57,466 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
> Reading auxiliary services manifest hdfs:/tmp/aux.json
> 2018-12-19 22:38:57,827 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
> Initialized auxiliary service mapreduce_shuffle
> 2018-12-19 22:38:57,828 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
> Adding auxiliary service mapreduce_shuffle version 2
> {code}
> REST API query shows:
> {code}
> $ curl --negotiate -u :  
> http://eyang-3.openstacklocal:8042/ws/v1/node/auxiliaryservices
> {"services":{}}
> {code}






[jira] [Updated] (YARN-9152) Auxiliary service REST API query does not return running services

2018-12-20 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9152:
-
Attachment: YARN-9152.1.patch

> Auxiliary service REST API query does not return running services
> -
>
> Key: YARN-9152
> URL: https://issues.apache.org/jira/browse/YARN-9152
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9152.1.patch
>
>
> Auxiliary service is configured with:
> {code}
> {
>   "services": [
> {
>   "name": "mapreduce_shuffle",
>   "version": "2",
>   "configuration": {
> "properties": {
>   "class.name": "org.apache.hadoop.mapred.ShuffleHandler",
>   "mapreduce.shuffle.transfer.buffer.size": "102400",
>   "mapreduce.shuffle.port": "13563"
> }
>   }
> }
>   ]
> }
> {code}
> Node manager log shows the service is registered:
> {code}
> 2018-12-19 22:38:57,466 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
> Reading auxiliary services manifest hdfs:/tmp/aux.json
> 2018-12-19 22:38:57,827 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
> Initialized auxiliary service mapreduce_shuffle
> 2018-12-19 22:38:57,828 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
> Adding auxiliary service mapreduce_shuffle version 2
> {code}
> REST API query shows:
> {code}
> $ curl --negotiate -u :  
> http://eyang-3.openstacklocal:8042/ws/v1/node/auxiliaryservices
> {"services":{}}
> {code}






[jira] [Commented] (YARN-9129) Ensure flush after printing to log plus additional cleanup

2018-12-19 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16725469#comment-16725469
 ] 

Billie Rinaldi commented on YARN-9129:
--

+1 for patch 3. Thanks, [~eyang]!

> Ensure flush after printing to log plus additional cleanup
> --
>
> Key: YARN-9129
> URL: https://issues.apache.org/jira/browse/YARN-9129
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9129.001.patch, YARN-9129.002.patch, 
> YARN-9129.003.patch
>
>
> Following up on findings in YARN-8962, I noticed the following issues in 
> container-executor and main.c:
> - There seem to be some vars that are not cleaned up in container_executor:
> In run_docker else: free docker_binary
> In exec_container:
>   before return INVALID_COMMAND_FILE: free docker_binary
>   3x return DOCKER_EXEC_FAILED: set exit code and goto cleanup instead
>   cleanup needed before exit calls?
> - In YARN-8777 we added several fprintf(stderr calls, but the convention in 
> container-executor.c appears to be fprintf(ERRORFILE followed by 
> fflush(ERRORFILE).
> - There are leaks in TestDockerUtil_test_add_ports_mapping_to_command_Test.
> - There are additional places where flush is not performed after writing to 
> stderr, including main.c display_feature_disabled_message. This can result in 
> the client not receiving the error message if the connection is closed too 
> quickly.






[jira] [Updated] (YARN-9129) Ensure flush after printing to log plus additional cleanup

2018-12-19 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9129:
-
Summary: Ensure flush after printing to log plus additional cleanup  (was: 
Ensure flush after printing to stderr plus additional cleanup)

> Ensure flush after printing to log plus additional cleanup
> --
>
> Key: YARN-9129
> URL: https://issues.apache.org/jira/browse/YARN-9129
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9129.001.patch, YARN-9129.002.patch, 
> YARN-9129.003.patch
>
>
> Following up on findings in YARN-8962, I noticed the following issues in 
> container-executor and main.c:
> - There seem to be some vars that are not cleaned up in container_executor:
> In run_docker else: free docker_binary
> In exec_container:
>   before return INVALID_COMMAND_FILE: free docker_binary
>   3x return DOCKER_EXEC_FAILED: set exit code and goto cleanup instead
>   cleanup needed before exit calls?
> - In YARN-8777 we added several fprintf(stderr calls, but the convention in 
> container-executor.c appears to be fprintf(ERRORFILE followed by 
> fflush(ERRORFILE).
> - There are leaks in TestDockerUtil_test_add_ports_mapping_to_command_Test.
> - There are additional places where flush is not performed after writing to 
> stderr, including main.c display_feature_disabled_message. This can result in 
> the client not receiving the error message if the connection is closed too 
> quickly.






[jira] [Commented] (YARN-9132) Add file permission check for auxiliary services manifest file

2018-12-19 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16725160#comment-16725160
 ] 

Billie Rinaldi commented on YARN-9132:
--

There's already a ticket open for the flaky test failure. I've rerun the 
precommit as well.

> Add file permission check for auxiliary services manifest file
> --
>
> Key: YARN-9132
> URL: https://issues.apache.org/jira/browse/YARN-9132
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9132.1.patch, YARN-9132.2.patch
>
>
> The manifest file in HDFS must be owned by YARN admin or YARN service user 
> only.  This check helps to prevent loading of malware into node manager JVM.






[jira] [Updated] (YARN-9131) Document usage of Dynamic auxiliary services

2018-12-19 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9131:
-
Attachment: YARN-9131.4.patch

> Document usage of Dynamic auxiliary services
> 
>
> Key: YARN-9131
> URL: https://issues.apache.org/jira/browse/YARN-9131
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9131.1.patch, YARN-9131.2.patch, YARN-9131.3.patch, 
> YARN-9131.4.patch
>
>
> This is a follow up issue to document YARN-9075 for admin to control which 
> aux service to add or remove.






[jira] [Commented] (YARN-9075) Dynamically add or remove auxiliary services

2018-12-18 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16724676#comment-16724676
 ] 

Billie Rinaldi commented on YARN-9075:
--

Thanks for checking, [~cheersyang]. Yes, the bug you found was causing test 
case failures for my patch, so I fixed it here.

> Dynamically add or remove auxiliary services
> 
>
> Key: YARN-9075
> URL: https://issues.apache.org/jira/browse/YARN-9075
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9075.001.patch, YARN-9075.002.patch, 
> YARN-9075.003.patch, YARN-9075.004.patch, YARN-9075.005.patch, 
> YARN-9075_Dynamic_Aux_Services_V1.pdf
>
>
> It would be useful to support adding, removing, or updating auxiliary 
> services without requiring a restart of NMs.






[jira] [Updated] (YARN-9132) Add file permission check for auxiliary services manifest file

2018-12-18 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9132:
-
Attachment: YARN-9132.2.patch

> Add file permission check for auxiliary services manifest file
> --
>
> Key: YARN-9132
> URL: https://issues.apache.org/jira/browse/YARN-9132
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9132.1.patch, YARN-9132.2.patch
>
>
> The manifest file in HDFS must be owned by YARN admin or YARN service user 
> only.  This check helps to prevent loading of malware into node manager JVM.






[jira] [Commented] (YARN-9131) Document usage of Dynamic auxiliary services

2018-12-18 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16724515#comment-16724515
 ] 

Billie Rinaldi commented on YARN-9131:
--

Thanks, [~eyang]. I fixed the bracket in patch 3 and also added documentation 
for the auxiliary services GET endpoint. I realized the formatting of the JSON 
was a bit off, so I addressed that as well in this patch.

> Document usage of Dynamic auxiliary services
> 
>
> Key: YARN-9131
> URL: https://issues.apache.org/jira/browse/YARN-9131
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9131.1.patch, YARN-9131.2.patch, YARN-9131.3.patch
>
>
> This is a follow up issue to document YARN-9075 for admin to control which 
> aux service to add or remove.






[jira] [Updated] (YARN-9131) Document usage of Dynamic auxiliary services

2018-12-18 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9131:
-
Attachment: YARN-9131.3.patch

> Document usage of Dynamic auxiliary services
> 
>
> Key: YARN-9131
> URL: https://issues.apache.org/jira/browse/YARN-9131
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9131.1.patch, YARN-9131.2.patch, YARN-9131.3.patch
>
>
> This is a follow up issue to document YARN-9075 for admin to control which 
> aux service to add or remove.






[jira] [Commented] (YARN-9117) Container shell does not work when using yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user is set

2018-12-18 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16724455#comment-16724455
 ] 

Billie Rinaldi commented on YARN-9117:
--

+1 for patch 1. This prevents the shell from being used in an insecure setup.

> Container shell does not work when using 
> yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user is set
> ---
>
> Key: YARN-9117
> URL: https://issues.apache.org/jira/browse/YARN-9117
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9117.001.patch
>
>
> If YARN is configured with 
> yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user to 
> restrict the YARN workload to run as a specific user only, the container 
> shell does not support this configuration, because the work directory is 
> owned by the local-user.  The container shell is intended to launch a bash 
> process owned by the application owner.  When the bash process owner and the 
> owner of the current working directory are mismatched, the child process 
> terminates immediately because it has no permission to WORKDIR.  It is 
> probably best to report this configuration as not supported rather than 
> allowing the application owner to gain all privileges of the local-user.






[jira] [Commented] (YARN-9075) Dynamically add or remove auxiliary services

2018-12-18 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16724411#comment-16724411
 ] 

Billie Rinaldi commented on YARN-9075:
--

Thanks for the review, [~eyang]. I uploaded patch 5, which fixes the 
checkstyle errors. With the current patch, the NM should detect manifest file 
changes within 2 minutes. (The NM only considers the service name and version 
when determining whether its current version is up to date, so if there are 
only changes to the service configuration, it will not reload the service.) If 
a service is removed from the manifest, the NM will stop and remove the 
running aux service. I think we can use YARN-9146 to track improvements in 
telling the NM when to reload services.
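The reload policy described in the comment above can be sketched as a small predicate. This is a hypothetical illustration of the stated behavior, not the actual NodeManager code (which is Java); the struct and function names are invented for the example.

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical illustration of the policy described above: the NM treats a
 * running aux service as up to date when its name and version both match the
 * manifest, so a change that touches only the configuration does not trigger
 * a reload. */
typedef struct {
  const char *name;
  const char *version;
  const char *configuration;  /* ignored by the up-to-date check */
} aux_service_spec;

static bool needs_reload(const aux_service_spec *running,
                         const aux_service_spec *manifest) {
  return strcmp(running->name, manifest->name) != 0 ||
         strcmp(running->version, manifest->version) != 0;
}
```

Under this policy, bumping the version field in the manifest is the way to force the NM to pick up a configuration-only change.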

> Dynamically add or remove auxiliary services
> 
>
> Key: YARN-9075
> URL: https://issues.apache.org/jira/browse/YARN-9075
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9075.001.patch, YARN-9075.002.patch, 
> YARN-9075.003.patch, YARN-9075.004.patch, YARN-9075.005.patch, 
> YARN-9075_Dynamic_Aux_Services_V1.pdf
>
>
> It would be useful to support adding, removing, or updating auxiliary 
> services without requiring a restart of NMs.






[jira] [Updated] (YARN-9075) Dynamically add or remove auxiliary services

2018-12-18 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9075:
-
Attachment: YARN-9075.005.patch

> Dynamically add or remove auxiliary services
> 
>
> Key: YARN-9075
> URL: https://issues.apache.org/jira/browse/YARN-9075
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9075.001.patch, YARN-9075.002.patch, 
> YARN-9075.003.patch, YARN-9075.004.patch, YARN-9075.005.patch, 
> YARN-9075_Dynamic_Aux_Services_V1.pdf
>
>
> It would be useful to support adding, removing, or updating auxiliary 
> services without requiring a restart of NMs.






[jira] [Updated] (YARN-9131) Document usage of Dynamic auxiliary services

2018-12-18 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9131:
-
Issue Type: Sub-task  (was: Bug)
Parent: YARN-9145

> Document usage of Dynamic auxiliary services
> 
>
> Key: YARN-9131
> URL: https://issues.apache.org/jira/browse/YARN-9131
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9131.1.patch, YARN-9131.2.patch
>
>
> This is a follow up issue to document YARN-9075 for admin to control which 
> aux service to add or remove.






[jira] [Updated] (YARN-9132) Add file permission check for auxiliary services manifest file

2018-12-18 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9132:
-
Issue Type: Sub-task  (was: Bug)
Parent: YARN-9145

> Add file permission check for auxiliary services manifest file
> --
>
> Key: YARN-9132
> URL: https://issues.apache.org/jira/browse/YARN-9132
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9132.1.patch
>
>
> The manifest file in HDFS must be owned by YARN admin or YARN service user 
> only.  This check helps to prevent loading of malware into node manager JVM.






[jira] [Updated] (YARN-9075) Dynamically add or remove auxiliary services

2018-12-18 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9075:
-
Issue Type: Sub-task  (was: Improvement)
Parent: YARN-9145

> Dynamically add or remove auxiliary services
> 
>
> Key: YARN-9075
> URL: https://issues.apache.org/jira/browse/YARN-9075
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9075.001.patch, YARN-9075.002.patch, 
> YARN-9075.003.patch, YARN-9075.004.patch, 
> YARN-9075_Dynamic_Aux_Services_V1.pdf
>
>
> It would be useful to support adding, removing, or updating auxiliary 
> services without requiring a restart of NMs.






[jira] [Created] (YARN-9145) [Umbrella] Dynamically add or remove auxiliary services

2018-12-18 Thread Billie Rinaldi (JIRA)
Billie Rinaldi created YARN-9145:


 Summary: [Umbrella] Dynamically add or remove auxiliary services
 Key: YARN-9145
 URL: https://issues.apache.org/jira/browse/YARN-9145
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Reporter: Billie Rinaldi
Assignee: Billie Rinaldi


Umbrella to track tasks supporting adding, removing, or updating auxiliary 
services without NM restart.






[jira] [Commented] (YARN-9072) Web browser close without proper exit can leak shell process

2018-12-18 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16724242#comment-16724242
 ] 

Billie Rinaldi commented on YARN-9072:
--

+1 for patch 5. Thanks, [~eyang]!

> Web browser close without proper exit can leak shell process
> 
>
> Key: YARN-9072
> URL: https://issues.apache.org/jira/browse/YARN-9072
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9072.001.patch, YARN-9072.002.patch, 
> YARN-9072.003.patch, YARN-9072.004.patch, YARN-9072.005.patch
>
>
> If the web browser is closed without typing exit in the container shell, a 
> bash process is left behind in the Docker container.  It would be nice to 
> detect when the websocket is closed and terminate the bash process in the 
> Docker container.






[jira] [Updated] (YARN-9132) Add file permission check for auxiliary services manifest file

2018-12-17 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9132:
-
Attachment: YARN-9132.1.patch

> Add file permission check for auxiliary services manifest file
> --
>
> Key: YARN-9132
> URL: https://issues.apache.org/jira/browse/YARN-9132
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9132.1.patch
>
>
> The manifest file in HDFS must be owned by YARN admin or YARN service user 
> only.  This check helps to prevent loading of malware into node manager JVM.






[jira] [Assigned] (YARN-9132) Add file permission check for auxiliary services manifest file

2018-12-17 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi reassigned YARN-9132:


Assignee: Billie Rinaldi

> Add file permission check for auxiliary services manifest file
> --
>
> Key: YARN-9132
> URL: https://issues.apache.org/jira/browse/YARN-9132
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
>
> The manifest file in HDFS must be owned by YARN admin or YARN service user 
> only.  This check helps to prevent loading of malware into node manager JVM.






[jira] [Updated] (YARN-9131) Document usage of Dynamic auxiliary services

2018-12-17 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9131:
-
Attachment: YARN-9131.2.patch

> Document usage of Dynamic auxiliary services
> 
>
> Key: YARN-9131
> URL: https://issues.apache.org/jira/browse/YARN-9131
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9131.1.patch, YARN-9131.2.patch
>
>
> This is a follow up issue to document YARN-9075 for admin to control which 
> aux service to add or remove.






[jira] [Commented] (YARN-9129) Ensure flush after printing to stderr plus additional cleanup

2018-12-17 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16723132#comment-16723132
 ] 

Billie Rinaldi commented on YARN-9129:
--

bq. stderr is unbuffered, so the fflush calls seem unnecessary to me.
Interesting; perhaps the missing message in the client has a different cause. 
I'll experiment with adding a flush in display_feature_disabled_message to 
find out whether it changes the behavior.
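For context on the buffering question discussed above: stderr is required to be not fully buffered by the C standard, but a stream opened on a regular file, such as an error log, is typically fully buffered, so an fprintf without fflush may not reach the file before the process exits. The sketch below demonstrates that difference; the file name and function names are invented for the example.

```c
#include <stdio.h>

/* Helper: return the current on-disk size of a file, in bytes. */
static long bytes_on_disk(const char *path) {
  FILE *f = fopen(path, "rb");
  if (f == NULL) return -1;
  fseek(f, 0, SEEK_END);
  long n = ftell(f);
  fclose(f);
  return n;
}

/* Hypothetical demonstration: with a fully buffered stream (the default for
 * regular files, forced here with setvbuf), an fprintf stays in the stdio
 * buffer until fflush pushes it out. Reports the file size before and after
 * the flush. */
static void demo_flush(long *before, long *after) {
  const char *path = "errorfile_demo.log";   /* hypothetical log path */
  FILE *errorfile = fopen(path, "w");
  setvbuf(errorfile, NULL, _IOFBF, 4096);    /* fully buffered, like a log */
  fprintf(errorfile, "feature disabled\n");  /* 17 bytes, sits in buffer */
  *before = bytes_on_disk(path);             /* 0: nothing flushed yet */
  fflush(errorfile);                         /* force the message to disk */
  *after = bytes_on_disk(path);              /* 17: message now visible */
  fclose(errorfile);
  remove(path);
}
```

If the stream really is unbuffered (as stderr usually is when attached to a terminal), the `before` and `after` sizes would match and the fflush would indeed be a no-op.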

> Ensure flush after printing to stderr plus additional cleanup
> -
>
> Key: YARN-9129
> URL: https://issues.apache.org/jira/browse/YARN-9129
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Priority: Major
> Fix For: 3.3.0
>
>
> Following up on findings in YARN-8962, I noticed the following issues in 
> container-executor and main.c:
> - There seem to be some vars that are not cleaned up in container_executor:
> In run_docker else: free docker_binary
> In exec_container:
>   before return INVALID_COMMAND_FILE: free docker_binary
>   3x return DOCKER_EXEC_FAILED: set exit code and goto cleanup instead
>   cleanup needed before exit calls?
> - In YARN-8777 we added several fprintf(stderr calls, but the convention in 
> container-executor.c appears to be fprintf(ERRORFILE followed by 
> fflush(ERRORFILE).
> - There are leaks in TestDockerUtil_test_add_ports_mapping_to_command_Test.
> - There are additional places where flush is not performed after writing to 
> stderr, including main.c display_feature_disabled_message. This can result in 
> the client not receiving the error message if the connection is closed too 
> quickly.






[jira] [Assigned] (YARN-9131) Document usage of Dynamic auxiliary services

2018-12-17 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi reassigned YARN-9131:


Assignee: Billie Rinaldi

> Document usage of Dynamic auxiliary services
> 
>
> Key: YARN-9131
> URL: https://issues.apache.org/jira/browse/YARN-9131
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9131.1.patch
>
>
> This is a follow up issue to document YARN-9075 for admin to control which 
> aux service to add or remove.






[jira] [Updated] (YARN-9131) Document usage of Dynamic auxiliary services

2018-12-17 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9131:
-
Attachment: YARN-9131.1.patch

> Document usage of Dynamic auxiliary services
> 
>
> Key: YARN-9131
> URL: https://issues.apache.org/jira/browse/YARN-9131
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Yang
>Priority: Major
> Attachments: YARN-9131.1.patch
>
>
> This is a follow up issue to document YARN-9075 for admin to control which 
> aux service to add or remove.






[jira] [Commented] (YARN-9072) Web browser close without proper exit can leak shell process

2018-12-14 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16721991#comment-16721991
 ] 

Billie Rinaldi commented on YARN-9072:
--

I am in favor of the latest patch, pending the precommit build. I tested and 
verified that the shell processes are not leaked on browser close or 
permission denial.

> Web browser close without proper exit can leak shell process
> 
>
> Key: YARN-9072
> URL: https://issues.apache.org/jira/browse/YARN-9072
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9072.001.patch, YARN-9072.002.patch, 
> YARN-9072.003.patch
>
>
> If the web browser is closed without typing exit in the container shell, a 
> bash process is left behind in the Docker container.  It would be nice to 
> detect when the websocket is closed and terminate the bash process in the 
> Docker container.






[jira] [Commented] (YARN-9091) Improve terminal message when connection is refused

2018-12-14 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16721989#comment-16721989
 ] 

Billie Rinaldi commented on YARN-9091:
--

Thanks, [~eyang]!

> Improve terminal message when connection is refused
> ---
>
> Key: YARN-9091
> URL: https://issues.apache.org/jira/browse/YARN-9091
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9091.001.patch
>
>
> If a user does not have proper access to log in to a container, the UI2 
> version of the Terminal will not display any message.  It would be nice to 
> report back that the connection has been refused.






[jira] [Commented] (YARN-9091) Improve terminal message when connection is refused

2018-12-14 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16721987#comment-16721987
 ] 

Billie Rinaldi commented on YARN-9091:
--

+1 for patch 1. I verified that the UI container terminal receives appropriate 
messages based on the exit codes.

> Improve terminal message when connection is refused
> ---
>
> Key: YARN-9091
> URL: https://issues.apache.org/jira/browse/YARN-9091
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9091.001.patch
>
>
> If a user does not have proper access to log in to a container, the UI2 
> version of the Terminal will not display any message.  It would be nice to 
> report back that the connection has been refused.






[jira] [Issue Comment Deleted] (YARN-9091) Improve terminal message when connection is refused

2018-12-14 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9091:
-
Comment: was deleted

(was: Thanks, [~eyang]!)

> Improve terminal message when connection is refused
> ---
>
> Key: YARN-9091
> URL: https://issues.apache.org/jira/browse/YARN-9091
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9091.001.patch
>
>
> If a user does not have proper access to log in to a container, the UI2 
> version of the Terminal will not display any message.  It would be nice to 
> report back that the connection has been refused.






[jira] [Commented] (YARN-9091) Improve terminal message when connection is refused

2018-12-14 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16721988#comment-16721988
 ] 

Billie Rinaldi commented on YARN-9091:
--

Thanks, [~eyang]!

> Improve terminal message when connection is refused
> ---
>
> Key: YARN-9091
> URL: https://issues.apache.org/jira/browse/YARN-9091
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9091.001.patch
>
>
> If a user does not have proper access to log in to a container, the UI2 
> version of the Terminal will not display any message.  It would be nice to 
> report back that the connection has been refused.





