[jira] [Commented] (YARN-6287) RMCriticalThreadUncaughtExceptionHandler.rmContext should be final

2017-03-07 Thread Corey Barker (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900066#comment-15900066
 ] 

Corey Barker commented on YARN-6287:


Thanks for the guidance, [~templedf]. Really appreciate the clear path to get 
it done.

> RMCriticalThreadUncaughtExceptionHandler.rmContext should be final
> --
>
> Key: YARN-6287
> URL: https://issues.apache.org/jira/browse/YARN-6287
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha2
>Reporter: Daniel Templeton
>Assignee: Corey Barker
>Priority: Minor
>  Labels: newbie
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-6287.001.patch
>
>
> {code}
>   private RMContext rmContext;
> {code}
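
For reference, a minimal sketch of the requested change, assuming the field is 
assigned only in the constructor (the class name matches the summary; the 
constructor shape is illustrative):

{code}
  // Declaring the field final documents that the handler's RMContext never
  // changes after construction and lets the compiler enforce it.
  private final RMContext rmContext;

  public RMCriticalThreadUncaughtExceptionHandler(RMContext rmContext) {
    this.rmContext = rmContext;
  }
{code}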






[jira] [Commented] (YARN-6281) Cleanup when AMRMProxy fails to initialize a new interceptor chain

2017-03-07 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900116#comment-15900116
 ] 

Subru Krishnan commented on YARN-6281:
--

+1 on the latest patch. Thanks [~botong]. I'll be committing this shortly.

> Cleanup when AMRMProxy fails to initialize a new interceptor chain
> --
>
> Key: YARN-6281
> URL: https://issues.apache.org/jira/browse/YARN-6281
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-6281.v1.patch, YARN-6281.v2.patch, 
> YARN-6281.v3.patch, YARN-6281.v4.patch
>
>
> When an app starts, AMRMProxy.initializePipeline creates a new Interceptor 
> chain and adds it to its pipeline mapping. Then it initializes the chain and 
> returns. The problem is that when the chain initialization throws (e.g. 
> because of a configuration error, an interceptor class not found, etc.), the 
> chain is not removed from AMRMProxy's pipeline mapping. 
> This patch also contains misc log message fixes in AMRMProxy. 
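
A minimal, self-contained sketch of the cleanup described above (the names are 
illustrative, not the actual AMRMProxy API): register the chain first, then 
roll the mapping back if initialization throws.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class PipelineRegistry<K, C> {
  interface Initializer<T> { void init(T chain) throws Exception; }

  private final Map<K, C> pipelines = new ConcurrentHashMap<>();

  void register(K appId, C chain, Initializer<C> init) throws Exception {
    pipelines.put(appId, chain);
    try {
      init.init(chain);
    } catch (Exception e) {
      // Without this, a chain whose initialization failed would stay in
      // the mapping, which is the leak described in this issue.
      pipelines.remove(appId);
      throw e;
    }
  }
}
{code}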






[jira] [Created] (YARN-6302) Fail the node, if Linux Container Executor is not configured properly

2017-03-07 Thread Miklos Szegedi (JIRA)
Miklos Szegedi created YARN-6302:


 Summary: Fail the node, if Linux Container Executor is not 
configured properly
 Key: YARN-6302
 URL: https://issues.apache.org/jira/browse/YARN-6302
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Miklos Szegedi
Assignee: Miklos Szegedi
Priority: Minor


We have a cluster with one node that has a misconfigured Linux Container 
Executor. Every time an AM or regular container is launched on that node, it 
fails. The node still has resources available, so it keeps failing apps until 
the administrator notices the issue and decommissions the node. AM 
blacklisting only helps if the application is already running.

As a possible improvement, when the LCE is used on the cluster and a NM gets 
certain errors back from the LCE, such as error 24 (configuration not found), 
we should stop allocating anything on that node, or shut the node down 
entirely. That kind of problem normally does not fix itself, and it means that 
nothing can really run on that node.

{code}
Application application_1488920587909_0010 failed 2 times due to AM Container 
for appattempt_1488920587909_0010_02 exited with exitCode: -1000
Failing this attempt.Diagnostics: Application application_1488920587909_0010 
initialization failed (exitCode=24) with output:
For more detailed output, check the application tracking page: 
http://node-1.domain.com:8088/cluster/app/application_1488920587909_0010 Then 
click on links to logs of each attempt.
. Failing the application.
{code}
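
A minimal sketch of the kind of guard being proposed, assuming (per the log 
above) that exit code 24 means the container-executor configuration could not 
be read; the names here are illustrative:

{code}
  // Exit codes that indicate a permanently broken LCE setup on this host.
  private static final int INVALID_CONFIG_FILE = 24;

  static boolean isFatalExecutorError(int exitCode) {
    // Such errors do not fix themselves, so rather than failing every
    // container, the NM should stop accepting allocations or shut down.
    return exitCode == INVALID_CONFIG_FILE;
  }
{code}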






[jira] [Commented] (YARN-6303) hadoop-mapreduce-client-jobclient.jar sets a main class that isn't in the JAR

2017-03-07 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900174#comment-15900174
 ] 

Daniel Templeton commented on YARN-6303:


Of course, the downside of fixing this issue is that it will break workflows 
for people who were taking advantage of it.

> hadoop-mapreduce-client-jobclient.jar sets a main class that isn't in the JAR
> -
>
> Key: YARN-6303
> URL: https://issues.apache.org/jira/browse/YARN-6303
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client
>Affects Versions: 3.0.0-alpha2
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: YARN-6303.001.patch
>
>
> The manifest for hadoop-mapreduce-client-jobclient.jar points to 
> {{org.apache.hadoop.test.MapredTestDriver}}, which is in the test JAR.  
> Without the test JAR in the class path, running the jobclient JAR will fail 
> with a class not found exception.
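
A small standalone way to observe the problem (a sketch, not part of the 
patch): read the Main-Class attribute from the JAR's manifest and check 
whether it resolves on the current classpath.

{code}
import java.util.jar.JarFile;

public class ManifestMainClassCheck {
  public static void main(String[] args) throws Exception {
    try (JarFile jar = new JarFile(args[0])) {
      // e.g. "org.apache.hadoop.test.MapredTestDriver" for the jobclient JAR
      String mainClass =
          jar.getManifest().getMainAttributes().getValue("Main-Class");
      try {
        Class.forName(mainClass);
        System.out.println(mainClass + " resolves on this classpath");
      } catch (ClassNotFoundException e) {
        System.out.println(mainClass + " is NOT on this classpath");
      }
    }
  }
}
{code}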






[jira] [Updated] (YARN-6303) hadoop-mapreduce-client-jobclient.jar sets a main class that isn't in the JAR

2017-03-07 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-6303:
---
Attachment: YARN-6303.001.patch

> hadoop-mapreduce-client-jobclient.jar sets a main class that isn't in the JAR
> -
>
> Key: YARN-6303
> URL: https://issues.apache.org/jira/browse/YARN-6303
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client
>Affects Versions: 3.0.0-alpha2
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: YARN-6303.001.patch
>
>
> The manifest for hadoop-mapreduce-client-jobclient.jar points to 
> {{org.apache.hadoop.test.MapredTestDriver}}, which is in the test JAR.  
> Without the test JAR in the class path, running the jobclient JAR will fail 
> with a class not found exception.






[jira] [Commented] (YARN-6275) Fail to show real-time tracking charts in SLS

2017-03-07 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900205#comment-15900205
 ] 

Robert Kanter commented on YARN-6275:
-

+1
will commit shortly

> Fail to show real-time tracking charts in SLS
> -
>
> Key: YARN-6275
> URL: https://issues.apache.org/jira/browse/YARN-6275
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6275.001.patch, YARN-6275.002.patch, 
> YARN-6275.003.patch, YARN-6275.004.patch
>
>
> # The {{html}} directory is not under the current working directory.
> # There is a bug in class {{SLSWebApp}}; here is the stack trace:
> {code}
> java.lang.NullPointerException
>   at 
> org.eclipse.jetty.server.handler.ResourceHandler.handle(ResourceHandler.java:499)
>   at org.apache.hadoop.yarn.sls.web.SLSWebApp$1.handle(SLSWebApp.java:152)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:524)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:745)
> {code}






[jira] [Commented] (YARN-6287) RMCriticalThreadUncaughtExceptionHandler.rmContext should be final

2017-03-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900092#comment-15900092
 ] 

Hudson commented on YARN-6287:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #11366 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11366/])
YARN-6287. RMCriticalThreadUncaughtExceptionHandler.rmContext should be 
(templedf: rev e0c239cdbda336e09a35d112d451c2e17d74a3fc)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMCriticalThreadUncaughtExceptionHandler.java


> RMCriticalThreadUncaughtExceptionHandler.rmContext should be final
> --
>
> Key: YARN-6287
> URL: https://issues.apache.org/jira/browse/YARN-6287
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha2
>Reporter: Daniel Templeton
>Assignee: Corey Barker
>Priority: Minor
>  Labels: newbie
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-6287.001.patch
>
>
> {code}
>   private RMContext rmContext;
> {code}






[jira] [Commented] (YARN-6234) Support multiple attempts on the node when AMRMProxy is enabled

2017-03-07 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900109#comment-15900109
 ] 

Subru Krishnan commented on YARN-6234:
--

[~giovanni.fumarola], I think we should reattach the new attempt to the same 
pipeline instead of shutting it down, as otherwise all the spawned containers 
in the secondary _subclusters_ will be killed.

> Support multiple attempts on the node when AMRMProxy is enabled
> ---
>
> Key: YARN-6234
> URL: https://issues.apache.org/jira/browse/YARN-6234
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: amrmproxy, federation, nodemanager
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-6234-YARN-2915.v1.patch
>
>
> Currently {{AMRMProxy}} initializes an interceptor chain pipeline for every 
> active AM in the node but it doesn't clean up & reinitialize correctly if 
> there's a second attempt for any AM in the same node. This jira is to track 
> the changes required to support multiple attempts on the node when AMRMProxy 
> is enabled.






[jira] [Created] (YARN-6303) hadoop-mapreduce-client-jobclient.jar sets a main class that isn't in the JAR

2017-03-07 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-6303:
--

 Summary: hadoop-mapreduce-client-jobclient.jar sets a main class 
that isn't in the JAR
 Key: YARN-6303
 URL: https://issues.apache.org/jira/browse/YARN-6303
 Project: Hadoop YARN
  Issue Type: Bug
  Components: client
Affects Versions: 3.0.0-alpha2
Reporter: Daniel Templeton
Assignee: Daniel Templeton
Priority: Minor


The manifest for hadoop-mapreduce-client-jobclient.jar points to 
{{org.apache.hadoop.test.MapredTestDriver}}, which is in the test JAR.  Without 
the test JAR in the class path, running the jobclient JAR will fail with a 
class not found exception.






[jira] [Updated] (YARN-4051) ContainerKillEvent is lost when container is In New State and is recovering

2017-03-07 Thread sandflee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sandflee updated YARN-4051:
---
Attachment: YARN-4051.06.patch

> ContainerKillEvent is lost when container is  In New State and is recovering
> 
>
> Key: YARN-4051
> URL: https://issues.apache.org/jira/browse/YARN-4051
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: sandflee
>Assignee: sandflee
>Priority: Critical
> Attachments: YARN-4051.01.patch, YARN-4051.02.patch, 
> YARN-4051.03.patch, YARN-4051.04.patch, YARN-4051.05.patch, YARN-4051.06.patch
>
>
> As in YARN-4050, the NM event dispatcher is blocked while the container is in 
> the New state; when we finish the application, the container stays alive even 
> after the NM event dispatcher is unblocked.






[jira] [Commented] (YARN-6165) Intra-queue preemption occurs even when preemption is turned off for a specific queue.

2017-03-07 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898975#comment-15898975
 ] 

Sunil G commented on YARN-6165:
---

Thanks [~eepayne]. The patch looks fine to me.



> Intra-queue preemption occurs even when preemption is turned off for a 
> specific queue.
> --
>
> Key: YARN-6165
> URL: https://issues.apache.org/jira/browse/YARN-6165
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, scheduler preemption
>Affects Versions: 3.0.0-alpha2
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: YARN-6165.001.patch
>
>
> Intra-queue preemption occurs even when preemption is turned on for the whole 
> cluster ({{yarn.resourcemanager.scheduler.monitor.enable == true}}) but 
> turned off for a specific queue 
> ({{yarn.scheduler.capacity.root.queue1.disable_preemption == true}}).






[jira] [Commented] (YARN-6280) Add a query parameter in ResourceManager Cluster Applications REST API to control whether or not returns ResourceRequest

2017-03-07 Thread Lantao Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899012#comment-15899012
 ] 

Lantao Jin commented on YARN-6280:
--

Hi [~sunilg], could you review this when you get a chance?

> Add a query parameter in ResourceManager Cluster Applications REST API to 
> control whether or not returns ResourceRequest
> 
>
> Key: YARN-6280
> URL: https://issues.apache.org/jira/browse/YARN-6280
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager, restapi
>Affects Versions: 2.7.3
>Reporter: Lantao Jin
> Attachments: YARN-6280.001.patch, YARN-6280.002.patch
>
>
> Beginning with v2.7, the ResourceManager Cluster Applications REST API 
> returns the ResourceRequest list. It is a very large structure in AppInfo.
> As a test, we used the URI below to query only 2 results:
> http://<RM address:port>/ws/v1/cluster/apps?states=running,accepted&limit=2
> The results are very different:
> ||Hadoop version||Total Characters||Total Words||Total Lines||Size||
> |2.4.1|1192|42|42|1.2 KB|
> |2.7.1|1222179|48740|48735|1.21 MB|
> Most RESTful API requesters don't know about this after upgrading, so their 
> old queries may cause the ResourceManager more GC load and make it slower. 
> Even requesters who do know about it have no way to reduce the impact on the 
> ResourceManager other than slowing down their query frequency.
> The patch adds a query parameter, "showResourceRequests", to help requesters 
> who don't need this information reduce the overhead. For interface 
> compatibility, the default value is true when the parameter is not set, so 
> the behaviour is unchanged.
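
For example, a requester that does not need the resource requests could append 
the new parameter (with the name given in the description) to the earlier 
query:

{code}
http://<RM address:port>/ws/v1/cluster/apps?states=running,accepted&limit=2&showResourceRequests=false
{code}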






[jira] [Commented] (YARN-6209) AM should get notified when application moved to new queue

2017-03-07 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898930#comment-15898930
 ] 

Varun Saxena commented on YARN-6209:


I agree we can do that in another JIRA. We were initially thinking of 
reporting the default label of the queue, and that would go hand in hand with 
the move operation as well, hence I had suggested doing it here.
But upon checking further, I proposed that we should report something called 
the default applicable label for an app. If we do that, it does not fall 
within the scope of this JIRA, and as you said, it also requires more 
discussion.
Whether labels should be pulled or pushed, we can discuss further on YARN-6148. 
We were thinking of pushing; if they are to be pulled, we need to decide when 
to pull them. Anyway, that is not a discussion related to this JIRA and can 
continue on YARN-6148.


> AM should get notified when application moved to new queue
> --
>
> Key: YARN-6209
> URL: https://issues.apache.org/jira/browse/YARN-6209
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Naganarasimha G R
>
> As Vinod pointed out in 
> [comment|https://issues.apache.org/jira/browse/YARN-5068?focusedCommentId=15867356=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15867356],
>  when an application is moved to a different queue, the AM should get 
> notified about the new queue. 
> YARN-1623 added queue information to RegisterApplicationMasterResponse. The 
> same functionality could be mirrored in AllocateResponse when an app is 
> moved to a new queue. 






[jira] [Assigned] (YARN-5301) NM mount cpu cgroups failed on some system

2017-03-07 Thread sandflee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sandflee reassigned YARN-5301:
--

Assignee: (was: sandflee)

> NM mount cpu cgroups failed on some system
> --
>
> Key: YARN-5301
> URL: https://issues.apache.org/jira/browse/YARN-5301
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: sandflee
>
> On Ubuntu with Linux kernel 3.19, NM start failed when automatic cgroup 
> mounting is enabled. Commands tried:
> fail: ./bin/container-executor --mount-cgroups yarn-hadoop cpu=/cgroup/cpu
> succ: ./bin/container-executor --mount-cgroups yarn-hadoop cpu,cpuacct=/cgroup/cpu






[jira] [Updated] (YARN-6295) AppLogAggregatorImpl thread leak

2017-03-07 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan updated YARN-6295:

Attachment: logAggr-leak.png

I added the NM jstack output; it shows many AppLogAggregatorImpl threads in 
the waiting state.

> AppLogAggregatorImpl thread leak
> 
>
> Key: YARN-6295
> URL: https://issues.apache.org/jira/browse/YARN-6295
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: log-aggregation
>Affects Versions: 2.4.0
>Reporter: Feng Yuan
> Attachments: logAggr-leak.png
>
>
> In Hadoop 2.4.0, the NM always has 100+ AppLogAggregatorImpl threads 
> running, while normally only 20+ containers are running at any time, so 
> there appears to be an AppLogAggregator thread leak.
> I observe that these threads are all waiting, because the logic of the code 
> is essentially:
> while (!appFinished) { wait(1s); }
> More details in comments.
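
An illustrative, self-contained sketch of the loop described above (not the 
actual 2.4.0 source): each aggregator thread parks in a one-second wait until 
its app is flagged finished, so the threads accumulate if the finish signal 
never arrives.

{code}
class AppLogAggregatorSketch implements Runnable {
  private volatile boolean appFinished = false;

  @Override
  public void run() {
    synchronized (this) {
      while (!appFinished) {
        try {
          wait(1000); // re-check the app-finished flag every second
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
          return;
        }
      }
    }
    // Log upload would happen here once the app has finished.
  }

  synchronized void finishApp() {
    appFinished = true;
    notifyAll(); // wake the waiting aggregator thread
  }
}
{code}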






[jira] [Commented] (YARN-6256) Add FROM_ID info key for timeline entities in reader response.

2017-03-07 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899526#comment-15899526
 ] 

Varun Saxena commented on YARN-6256:


[~sjlee0], any further comments? If not, I can go ahead and commit this.

> Add FROM_ID info key for timeline entities in reader response. 
> ---
>
> Key: YARN-6256
> URL: https://issues.apache.org/jira/browse/YARN-6256
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6256-YARN-5355.0001.patch, 
> YARN-6256-YARN-5355.0002.patch, YARN-6256-YARN-5355.0003.patch
>
>
> This is a continuation of YARN-6027, adding the FROM_ID key to all other 
> timeline entity responses, which include:
> # Flow run entity response 
> # Application entity response
> # Generic timeline entity response - here we need to revisit the idprefix 
> filter, which is now provided separately. 






[jira] [Updated] (YARN-6295) AppLogAggregatorImpl thread leak

2017-03-07 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan updated YARN-6295:

Attachment: QQ截图20170307220524.png

I added the NM jstack output; it shows many AppLogAggregatorImpl threads in 
the waiting state.

> AppLogAggregatorImpl thread leak
> 
>
> Key: YARN-6295
> URL: https://issues.apache.org/jira/browse/YARN-6295
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: log-aggregation
>Affects Versions: 2.4.0
>Reporter: Feng Yuan
>
> In Hadoop 2.4.0, the NM always has 100+ AppLogAggregatorImpl threads 
> running, while normally only 20+ containers are running at any time, so 
> there appears to be an AppLogAggregator thread leak.
> I observe that these threads are all waiting, because the logic of the code 
> is essentially:
> while (!appFinished) { wait(1s); }
> More details in comments.






[jira] [Updated] (YARN-6295) AppLogAggregatorImpl thread leak

2017-03-07 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan updated YARN-6295:

Attachment: (was: QQ截图20170307220524.png)

> AppLogAggregatorImpl thread leak
> 
>
> Key: YARN-6295
> URL: https://issues.apache.org/jira/browse/YARN-6295
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: log-aggregation
>Affects Versions: 2.4.0
>Reporter: Feng Yuan
>
> In Hadoop 2.4.0, the NM always has 100+ AppLogAggregatorImpl threads 
> running, while normally only 20+ containers are running at any time, so 
> there appears to be an AppLogAggregator thread leak.
> I observe that these threads are all waiting, because the logic of the code 
> is essentially:
> while (!appFinished) { wait(1s); }
> More details in comments.






[jira] [Issue Comment Deleted] (YARN-6295) AppLogAggregatorImpl thread leak

2017-03-07 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan updated YARN-6295:

Comment: was deleted

(was: I added the NM jstack output; it shows many AppLogAggregatorImpl 
threads in the waiting state.)

> AppLogAggregatorImpl thread leak
> 
>
> Key: YARN-6295
> URL: https://issues.apache.org/jira/browse/YARN-6295
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: log-aggregation
>Affects Versions: 2.4.0
>Reporter: Feng Yuan
>
> In Hadoop 2.4.0, the NM always has 100+ AppLogAggregatorImpl threads 
> running, while normally only 20+ containers are running at any time, so 
> there appears to be an AppLogAggregator thread leak.
> I observe that these threads are all waiting, because the logic of the code 
> is essentially:
> while (!appFinished) { wait(1s); }
> More details in comments.






[jira] [Created] (YARN-6295) AppLogAggregatorImpl thread leak

2017-03-07 Thread Feng Yuan (JIRA)
Feng Yuan created YARN-6295:
---

 Summary: AppLogAggregatorImpl thread leak
 Key: YARN-6295
 URL: https://issues.apache.org/jira/browse/YARN-6295
 Project: Hadoop YARN
  Issue Type: Bug
  Components: log-aggregation
Affects Versions: 2.4.0
Reporter: Feng Yuan


In Hadoop 2.4.0, the NM always has 100+ AppLogAggregatorImpl threads running, 
while normally only 20+ containers are running at any time, so there appears 
to be an AppLogAggregator thread leak.
I observe that these threads are all waiting, because the logic of the code is 
essentially:
while (!appFinished) { wait(1s); }
More details in comments.






Is there an AppLogAggregatorImpl thread leak?

2017-03-07 Thread 袁枫
In Hadoop 2.4.0, the NM always has 100+ AppLogAggregatorImpl threads running, 
while normally only 20+ containers are running at any time, so there appears 
to be an AppLogAggregator thread leak.
I observe that these threads are all waiting, because the logic of the code is 
essentially:
while (!appFinished) { wait(1s); }

(The original email attached a jstack screenshot showing the waiting threads.)


[jira] [Commented] (YARN-5956) Refactor ClientRMService

2017-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899642#comment-15899642
 ] 

Hadoop QA commented on YARN-5956:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 57 unchanged - 5 fixed = 57 total (was 62) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 35s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5956 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12856611/YARN-5956.13.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 335674f9d5d5 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f597f4c |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15189/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15189/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15189/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Updated] (YARN-5956) Refactor ClientRMService

2017-03-07 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated YARN-5956:
-
Attachment: YARN-5956.13.patch

> Refactor ClientRMService
> 
>
> Key: YARN-5956
> URL: https://issues.apache.org/jira/browse/YARN-5956
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha2
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
> Attachments: YARN-5956.01.patch, YARN-5956.02.patch, 
> YARN-5956.03.patch, YARN-5956.04.patch, YARN-5956.05.patch, 
> YARN-5956.06.patch, YARN-5956.07.patch, YARN-5956.08.patch, 
> YARN-5956.09.patch, YARN-5956.10.patch, YARN-5956.11.patch, 
> YARN-5956.12.patch, YARN-5956.13.patch
>
>
> Some refactoring can be done in {{ClientRMService}}:
> - Remove redundant variable declarations
> - Fill in missing javadocs
> - Use proper variable access modifiers
> - Fix some typos in method names and exception messages






[jira] [Commented] (YARN-6256) Add FROM_ID info key for timeline entities in reader response.

2017-03-07 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899702#comment-15899702
 ] 

Sangjin Lee commented on YARN-6256:
---

+1. Sorry for the late reply. Thanks [~rohithsharma]!

> Add FROM_ID info key for timeline entities in reader response. 
> ---
>
> Key: YARN-6256
> URL: https://issues.apache.org/jira/browse/YARN-6256
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6256-YARN-5355.0001.patch, 
> YARN-6256-YARN-5355.0002.patch, YARN-6256-YARN-5355.0003.patch
>
>
> This is a continuation of YARN-6027, adding the FROM_ID key to all other 
> timeline entity responses, which include:
> # Flow run entity response 
> # Application entity response
> # Generic timeline entity response - here we need to revisit the idprefix 
> filter, which is now provided separately. 






[jira] [Commented] (YARN-6140) start time key in NM leveldb store should be removed when container is removed

2017-03-07 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899804#comment-15899804
 ] 

Sangjin Lee commented on YARN-6140:
---

As mentioned above, this is not reproduced in the trunk version (YARN-5355) or 
the branch-2 version (YARN-5355-branch-2). I saw this when I backported the 
timeline service to our internal branch based on 2.6. The unit test fails 
because the remove calls did not remove the records fully.

I'm not sure why the same code works against versions newer than 2.6 but not 
against 2.6. Since this is not happening in the latest versions, I don't think 
it's a serious issue, but I wanted to see if we can still delete the column to 
be on the safe side. It would be good to understand why it does not work 
against 2.6.

> start time key in NM leveldb store should be removed when container is removed
> --
>
> Key: YARN-6140
> URL: https://issues.apache.org/jira/browse/YARN-6140
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: YARN-5355
>Reporter: Sangjin Lee
>Assignee: Ajith S
>
> It appears that the start time key is not removed when the container is 
> removed. The key was introduced in YARN-5792.
> I found this while backporting the YARN-5355-branch-2 branch to our internal 
> branch loosely based on 2.6.0. The {{TestNMLeveldbStateStoreService}} test 
> was failing because of this.
> I'm not sure why we didn't see this earlier.






[jira] [Updated] (YARN-6195) Export UsedCapacity and AbsoluteUsedCapacity to JMX

2017-03-07 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6195:
-
Component/s: capacityscheduler

> Export UsedCapacity and AbsoluteUsedCapacity to JMX
> ---
>
> Key: YARN-6195
> URL: https://issues.apache.org/jira/browse/YARN-6195
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler, metrics, yarn
>Affects Versions: 3.0.0-alpha3
>Reporter: Benson Qiu
>Assignee: Benson Qiu
> Attachments: YARN-6195.001.patch
>
>
> `usedCapacity` and `absoluteUsedCapacity` are currently not exposed via JMX. 






[jira] [Updated] (YARN-6281) Cleanup when AMRMProxy fails to initialize a new interceptor chain

2017-03-07 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-6281:
---
Attachment: YARN-6281.v3.patch

Unit test added

> Cleanup when AMRMProxy fails to initialize a new interceptor chain
> --
>
> Key: YARN-6281
> URL: https://issues.apache.org/jira/browse/YARN-6281
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-6281.v1.patch, YARN-6281.v2.patch, 
> YARN-6281.v3.patch
>
>
> When an app starts, AMRMProxy.initializePipeline creates a new Interceptor 
> chain and adds it to its pipeline mapping. Then it initializes the chain and 
> returns. The problem is that when the chain initialization throws (e.g. 
> because of a configuration error, an interceptor class not found, etc.), the 
> chain is not removed from AMRMProxy's pipeline mapping. 
> This patch also contains misc log message fixes in AMRMProxy. 






[jira] [Updated] (YARN-6287) RMCriticalThreadUncaughtExceptionHandler.rmContext should be final

2017-03-07 Thread Corey Barker (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Corey Barker updated YARN-6287:
---
Attachment: YARN-6287.001.patch

> RMCriticalThreadUncaughtExceptionHandler.rmContext should be final
> --
>
> Key: YARN-6287
> URL: https://issues.apache.org/jira/browse/YARN-6287
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha2
>Reporter: Daniel Templeton
>Assignee: Corey Barker
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6287.001.patch
>
>
> {code}
>   private RMContext rmContext;
> {code}






[jira] [Commented] (YARN-4266) Allow whitelisted users to disable user re-mapping/squashing when launching docker containers

2017-03-07 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899828#comment-15899828
 ] 

Eric Badger commented on YARN-4266:
---

[~tangzhankun], thanks for pointing that out. I hadn't seen that conversation. 

It seems that the major issue with using --user=UID:GID is that there is no 
username. But is there any reason that we can't just add in an environment 
variable to the docker run command that is set to the username and then run a 
usermod to change the username of the associated UID? Usernames are just 
cosmetic and everything is done via UIDs, so I don't think it makes sense to 
run the docker container based on a username. 

Something like:
{{docker run --user=2000 -e USERNAME=\*username crafted in code\*}}

And then in the container startup command (with the container running as root):
{{usermod -l $USERNAME $(getent passwd "1001" | cut -d: -f1) && su $USERNAME}}

There are probably more efficient ways to do this, but this is just a general 
idea and proof of concept. 

The main problem that I can see with this method is if there is already a user 
in the image associated with the UID of the user on the host. In that case, we 
would need to remap the UID of the user in the image to something different 
before we could do the usermod (or else we would have potential permissions 
issues inside the container). However, this would also be easy to do. 

[~sidharta-s], [~templedf], [~vvasudev], [~zyluo], you were all very active on 
YARN-5360. Do you have any thoughts on the approach above given my explanation?

> Allow whitelisted users to disable user re-mapping/squashing when launching 
> docker containers
> -
>
> Key: YARN-4266
> URL: https://issues.apache.org/jira/browse/YARN-4266
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Zhankun Tang
> Attachments: YARN-4266.001.patch, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v2.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v3.pdf, 
> YARN-4266-branch-2.8.001.patch
>
>
> Docker provides a mechanism (the --user switch) that enables us to specify 
> the user the container processes should run as. We use this mechanism today 
> when launching docker containers . In non-secure mode, we run the docker 
> container based on 
> `yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user` and in 
> secure mode, as the submitting user. However, this mechanism breaks down with 
> a large number of 'pre-created' images which don't necessarily have the users 
> available within the image. Examples of such images include shared images 
> that need to be used by multiple users. We need a way in which we can allow a 
> pre-defined set of users to run containers based on existing images, without 
> using the --user switch. There are some implications of disabling this user 
> squashing that we'll need to work through : log aggregation, artifact 
> deletion etc.,






[jira] [Commented] (YARN-6281) Cleanup when AMRMProxy fails to initialize a new interceptor chain

2017-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899837#comment-15899837
 ] 

Hadoop QA commented on YARN-6281:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 2 new + 29 unchanged - 0 fixed = 31 total (was 29) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m  0s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.amrmproxy.TestAMRMProxyService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6281 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12856636/YARN-6281.v3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 315535e8bd74 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f597f4c |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15190/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15190/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15190/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15190/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |

This message was automatically generated.

[jira] [Updated] (YARN-6050) AMs can't be scheduled on racks or nodes

2017-03-07 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-6050:

Attachment: YARN-6050.009.patch

I see. Thanks for explaining the {{:0}} nodes, [~leftnoteasy].

Patch 009 updates the label manager code to keep track of the active 
{{NodeId}}s. This allows the {{getApplicableNodeCountForAM}} method to 
correctly determine the number of nodes. It takes the union of all 
{{NodeId}}s from the resource requests and intersects it with the active 
{{NodeId}}s for the label. By comparing the {{NodeId}}s and storing them in 
sets, we won't get any duplicates as in patch 008; see the sketch below. 
There are also new/updated tests.
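
An illustrative sketch of the counting logic described above (a hypothetical 
helper, not the patch itself): the union of requested nodes is intersected 
with the label's active nodes, and set semantics remove the duplicates.

{code}
import java.util.HashSet;
import java.util.Set;

final class AmNodeCountSketch {
  static <N> int applicableNodeCount(Set<N> requestedNodes,
                                     Set<N> activeNodesForLabel) {
    Set<N> applicable = new HashSet<>(requestedNodes); // union of requests
    applicable.retainAll(activeNodesForLabel);         // intersect with active
    return applicable.size();                          // sets dedupe NodeIds
  }
}
{code}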

[~leftnoteasy], if you think it would be better to leave the label manager 
code unchanged and instead filter out the {{:0}} nodes, I can do that instead.

> AMs can't be scheduled on racks or nodes
> 
>
> Key: YARN-6050
> URL: https://issues.apache.org/jira/browse/YARN-6050
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-6050.001.patch, YARN-6050.002.patch, 
> YARN-6050.003.patch, YARN-6050.004.patch, YARN-6050.005.patch, 
> YARN-6050.006.patch, YARN-6050.007.patch, YARN-6050.008.patch, 
> YARN-6050.009.patch
>
>
> Yarn itself supports rack/node aware scheduling for AMs; however, there 
> currently are two problems:
> # To specify hard or soft rack/node requests, you have to specify more than 
> one {{ResourceRequest}}.  For example, if you want to schedule an AM only on 
> "rackA", you have to create two {{ResourceRequest}}, like this:
> {code}
> ResourceRequest.newInstance(PRIORITY, ANY, CAPABILITY, NUM_CONTAINERS, false);
> ResourceRequest.newInstance(PRIORITY, "rackA", CAPABILITY, NUM_CONTAINERS, 
> true);
> {code}
> The problem is that the Yarn API doesn't actually allow you to specify more 
> than one {{ResourceRequest}} in the {{ApplicationSubmissionContext}}.  The 
> current behavior is to either build one from {{getResource}} or directly from 
> {{getAMContainerResourceRequest}}, depending on if 
> {{getAMContainerResourceRequest}} is null or not.  We'll need to add a third 
> method, say {{getAMContainerResourceRequests}}, which takes a list of 
> {{ResourceRequest}} so that clients can specify the multiple resource 
> requests.
> # There are some places where things are hardcoded to overwrite what the 
> client specifies.  These are pretty straightforward to fix.






[jira] [Commented] (YARN-6297) TestAppLogAggregatorImp.verifyFilesUploaded() should check # of files uploaded with that of files expected

2017-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900370#comment-15900370
 ] 

Hadoop QA commented on YARN-6297:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
31s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6297 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12856692/YARN-6297.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 095947b0ad54 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1598fd3 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15198/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15198/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestAppLogAggregatorImp.verifyFilesUploaded() should check # of files 
> uploaded with that of files expected
> --
>
> Key: YARN-6297
> URL: https://issues.apache.org/jira/browse/YARN-6297
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>  Labels: test
> Attachments: YARN-6297.01.patch
>
>

[jira] [Commented] (YARN-5948) Implement MutableConfigurationManager for handling storage into configuration store

2017-03-07 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900373#comment-15900373
 ] 

Xuan Gong commented on YARN-5948:
-

Thanks for the patch, [~jonathan_huang].

Overall looks good.

One nit:

instead of using:
{code}
public static final String SCHEDULER_CONFIGURATION_STORE = YARN_PREFIX +
 "scheduler.configuration.store";
public static final String MEMORY_CONFIGURATION_STORE = "memory";
{code}

Could you use a configuration key such as SCHEDULER_CONFIGURATION_STORE_CLASS 
instead, and create a SCHEDULER_CONFIGURATION_STORE factory to load it?

This is an example of how we load the RMStateStore class:
{code}
  public static RMStateStore getStore(Configuration conf) {
    Class<? extends RMStateStore> storeClass =
        conf.getClass(YarnConfiguration.RM_STORE,
            MemoryRMStateStore.class, RMStateStore.class);
    LOG.info("Using RMStateStore implementation - " + storeClass);
    return ReflectionUtils.newInstance(storeClass, conf);
  }
{code}
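
For illustration, a minimal sketch of what such a factory could look like for the 
configuration store. The key SCHEDULER_CONFIGURATION_STORE_CLASS and the class 
names YarnConfigurationStore/MemoryConfigurationStore are assumptions mirroring 
the RMStateStore example above, not the actual patch:
{code}
// Sketch only: key and class names are assumed, mirroring the factory above.
public static YarnConfigurationStore getStore(Configuration conf) {
  Class<? extends YarnConfigurationStore> storeClass =
      conf.getClass(YarnConfiguration.SCHEDULER_CONFIGURATION_STORE_CLASS,
          MemoryConfigurationStore.class, YarnConfigurationStore.class);
  LOG.info("Using YarnConfigurationStore implementation - " + storeClass);
  return ReflectionUtils.newInstance(storeClass, conf);
}
{code}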

> Implement MutableConfigurationManager for handling storage into configuration 
> store
> ---
>
> Key: YARN-5948
> URL: https://issues.apache.org/jira/browse/YARN-5948
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-5948.001.patch, YARN-5948-YARN-5734.002.patch, 
> YARN-5948-YARN-5734.003.patch, YARN-5948-YARN-5734.004.patch, 
> YARN-5948-YARN-5734.005.patch
>
>
> The MutableConfigurationManager will take REST calls with desired client 
> configuration changes and call YarnConfigurationStore methods to store these 
> changes in the backing store.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4051) ContainerKillEvent is lost when container is In New State and is recovering

2017-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900624#comment-15900624
 ] 

Hadoop QA commented on YARN-4051:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 2 new + 140 unchanged - 1 fixed = 142 total (was 141) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m  
2s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-4051 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12856722/YARN-4051.06.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fde589693e14 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 28daaf0 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15201/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15201/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15201/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ContainerKillEvent is lost when container is  In New State and is recovering
> 
>
> Key: 

[jira] [Updated] (YARN-5948) Implement MutableConfigurationManager for handling storage into configuration store

2017-03-07 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-5948:

Attachment: YARN-5948-YARN-5734.006.patch

> Implement MutableConfigurationManager for handling storage into configuration 
> store
> ---
>
> Key: YARN-5948
> URL: https://issues.apache.org/jira/browse/YARN-5948
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-5948.001.patch, YARN-5948-YARN-5734.002.patch, 
> YARN-5948-YARN-5734.003.patch, YARN-5948-YARN-5734.004.patch, 
> YARN-5948-YARN-5734.005.patch, YARN-5948-YARN-5734.006.patch
>
>
> The MutableConfigurationManager will take REST calls with desired client 
> configuration changes and call YarnConfigurationStore methods to store these 
> changes in the backing store.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5948) Implement MutableConfigurationManager for handling storage into configuration store

2017-03-07 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900626#comment-15900626
 ] 

Jonathan Hung commented on YARN-5948:
-

Thanks for the review, [~xgong], uploaded another patch addressing this.

> Implement MutableConfigurationManager for handling storage into configuration 
> store
> ---
>
> Key: YARN-5948
> URL: https://issues.apache.org/jira/browse/YARN-5948
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-5948.001.patch, YARN-5948-YARN-5734.002.patch, 
> YARN-5948-YARN-5734.003.patch, YARN-5948-YARN-5734.004.patch, 
> YARN-5948-YARN-5734.005.patch, YARN-5948-YARN-5734.006.patch
>
>
> The MutableConfigurationManager will take REST calls with desired client 
> configuration changes and call YarnConfigurationStore methods to store these 
> changes in the backing store.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6050) AMs can't be scheduled on racks or nodes

2017-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900441#comment-15900441
 ] 

Hadoop QA commented on YARN-6050:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 17 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
49s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 38s{color} | {color:orange} root: The patch generated 9 new + 1924 unchanged 
- 5 fixed = 1933 total (was 1929) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 895 unchanged - 2 fixed = 895 total (was 897) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 40s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 39s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 31s{color} 
| {color:red} hadoop-mapreduce-client-jobclient in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 32s{color} 
| {color:red} hadoop-sls in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 

[jira] [Commented] (YARN-6289) Fail to achieve data locality when running MapReduce and Spark on HDFS

2017-03-07 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900467#comment-15900467
 ] 

Wangda Tan commented on YARN-6289:
--

[~Huangkx6810],

For locality scheduling, there are typically two causes:

1) The FileSystem/application must support locality. For example, FileInputFormat 
in MR uses FileSystem.getBlockLocations to find where blocks are located.

2) A misconfigured topology script returns the wrong rack name for given hosts. 
https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/RackAwareness.html 

In addition to those:

3) YARN-4287 fixed an overly long delay while waiting for locality in 
CapacityScheduler, but it will not handle #1/#2.
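
As a quick check for cause #1, you can ask HDFS directly which hosts hold the 
input blocks; a minimal sketch (the input path is illustrative):
{code}
// Prints the datanodes holding each block of the input file, i.e. the hosts
// a locality-aware scheduler could place tasks on.
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
FileStatus stat = fs.getFileStatus(new Path("/user/test/wordcount-input"));
BlockLocation[] locations = fs.getFileBlockLocations(stat, 0, stat.getLen());
for (BlockLocation location : locations) {
  System.out.println(java.util.Arrays.toString(location.getHosts()));
}
{code}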

> Fail to achieve data locality when running MapReduce and Spark on HDFS
> -
>
> Key: YARN-6289
> URL: https://issues.apache.org/jira/browse/YARN-6289
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
> Environment: Hardware configuration
> CPU: 2 x Intel(R) Xeon(R) E5-2620 v2 @ 2.10GHz /15M Cache 6-Core 12-Thread 
> Memory: 128GB Memory (16x8GB) 1600MHz
> Disk: 600GBx2 3.5-inch with RAID-1
> Network bandwidth: 968Mb/s
> Software configuration
> Spark-1.6.2   Hadoop-2.7.1 
>Reporter: Huangkaixuan
> Attachments: Hadoop_Spark_Conf.zip, YARN-DataLocality.docx
>
>
> When running a simple wordcount experiment on YARN, I noticed that the task 
> failed to achieve data locality, even though there is no other job running on 
> the cluster at the same time. The experiment was done in a 7-node (1 master, 
> 6 data nodes/node managers) cluster and the input of the wordcount job (both 
> Spark and MapReduce) is a single-block file in HDFS which is two-way 
> replicated (replication factor = 2). I ran wordcount on YARN for 10 times. 
> The results show that only 30% of tasks can achieve data locality, which 
> seems like the result of a random placement of tasks. The experiment details 
> are in the attachment, and feel free to reproduce the experiments.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6050) AMs can't be scheduled on racks or nodes

2017-03-07 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900488#comment-15900488
 ] 

Robert Kanter commented on YARN-6050:
-

I'm not sure why it wasn't able to compile those modules; it's not a problem 
for me locally.  I've kicked off another Jenkins run.

> AMs can't be scheduled on racks or nodes
> 
>
> Key: YARN-6050
> URL: https://issues.apache.org/jira/browse/YARN-6050
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-6050.001.patch, YARN-6050.002.patch, 
> YARN-6050.003.patch, YARN-6050.004.patch, YARN-6050.005.patch, 
> YARN-6050.006.patch, YARN-6050.007.patch, YARN-6050.008.patch, 
> YARN-6050.009.patch, YARN-6050.010.patch
>
>
> Yarn itself supports rack/node aware scheduling for AMs; however, there 
> currently are two problems:
> # To specify hard or soft rack/node requests, you have to specify more than 
> one {{ResourceRequest}}.  For example, if you want to schedule an AM only on 
> "rackA", you have to create two {{ResourceRequest}}, like this:
> {code}
> ResourceRequest.newInstance(PRIORITY, ANY, CAPABILITY, NUM_CONTAINERS, false);
> ResourceRequest.newInstance(PRIORITY, "rackA", CAPABILITY, NUM_CONTAINERS, 
> true);
> {code}
> The problem is that the Yarn API doesn't actually allow you to specify more 
> than one {{ResourceRequest}} in the {{ApplicationSubmissionContext}}.  The 
> current behavior is to either build one from {{getResource}} or directly from 
> {{getAMContainerResourceRequest}}, depending on if 
> {{getAMContainerResourceRequest}} is null or not.  We'll need to add a third 
> method, say {{getAMContainerResourceRequests}}, which takes a list of 
> {{ResourceRequest}} so that clients can specify the multiple resource 
> requests.
> # There are some places where things are hardcoded to overwrite what the 
> client specifies.  These are pretty straightforward to fix.
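
A hedged sketch of how a client could use the proposed list-taking method (the 
setter name follows the description above and is hypothetical at this point):
{code}
// relaxLocality=false on ANY plus relaxLocality=true on "rackA" pins the AM
// to rackA, per the example in the description.
List<ResourceRequest> amRequests = new ArrayList<>();
amRequests.add(ResourceRequest.newInstance(
    PRIORITY, ResourceRequest.ANY, CAPABILITY, NUM_CONTAINERS, false));
amRequests.add(ResourceRequest.newInstance(
    PRIORITY, "rackA", CAPABILITY, NUM_CONTAINERS, true));
// Proposed (hypothetical) setter from the description:
submissionContext.setAMContainerResourceRequests(amRequests);
{code}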



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6275) Fail to show real-time tracking charts in SLS

2017-03-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900291#comment-15900291
 ] 

Hudson commented on YARN-6275:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11367 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11367/])
YARN-6275. Fail to show real-time tracking charts in SLS (yufeigu via (rkanter: 
rev 1598fd3b7948b3592775e3be3227c4a336122bc9)
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/web/SLSWebApp.java
* (edit) hadoop-tools/hadoop-sls/src/main/bin/slsrun.sh


> Fail to show real-time tracking charts in SLS
> -
>
> Key: YARN-6275
> URL: https://issues.apache.org/jira/browse/YARN-6275
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6275.001.patch, YARN-6275.002.patch, 
> YARN-6275.003.patch, YARN-6275.004.patch
>
>
> # The {{html}} directory is not placed under the current working directory.
> # There is a bug in Class {{SLSWebApp}}, here is the stack trace:
> {code}
> java.lang.NullPointerException
>   at 
> org.eclipse.jetty.server.handler.ResourceHandler.handle(ResourceHandler.java:499)
>   at org.apache.hadoop.yarn.sls.web.SLSWebApp$1.handle(SLSWebApp.java:152)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:524)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6297) TestAppLogAggregatorImp.verifyFilesUploaded() should check # of files uploaded with that of files expected

2017-03-07 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900299#comment-15900299
 ] 

Haibo Chen commented on YARN-6297:
--

[~rkanter] Can you help review it please?

> TestAppLogAggregatorImp.verifyFilesUploaded() should check # of files 
> uploaded with that of files expected
> --
>
> Key: YARN-6297
> URL: https://issues.apache.org/jira/browse/YARN-6297
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>  Labels: test
> Attachments: YARN-6297.01.patch
>
>
> Per YARN-6252
> {code:java}
>   private static void verifyFilesUploaded(Set<String> filesUploaded,
>       Set<String> filesExpected) {
> final String errMsgPrefix = "The set of files uploaded are not the same " 
> +
> "as expected";
> if(filesUploaded.size() != filesUploaded.size()) {
>   fail(errMsgPrefix + ": actual size: " + filesUploaded.size() + " vs " +
>   "expected size: " + filesExpected.size());
> }
> for(String file: filesExpected) {
>   if(!filesUploaded.contains(file)) {
> fail(errMsgPrefix + ": expecting " + file);
>   }
> }
>   }
> {code}
> should check the number of files uploaded against the number of files 
> expected.
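
For clarity, a sketch of the presumably intended comparison (the snippet above 
compares {{filesUploaded.size()}} with itself):
{code}
// Compare uploaded against expected, not uploaded against itself.
if (filesUploaded.size() != filesExpected.size()) {
  fail(errMsgPrefix + ": actual size: " + filesUploaded.size()
      + " vs expected size: " + filesExpected.size());
}
{code}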



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6050) AMs can't be scheduled on racks or nodes

2017-03-07 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-6050:

Attachment: YARN-6050.010.patch

The 010 patch is very similar to the 009 patch, but fixes the relevant 
checkstyle and javadoc issues.

> AMs can't be scheduled on racks or nodes
> 
>
> Key: YARN-6050
> URL: https://issues.apache.org/jira/browse/YARN-6050
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-6050.001.patch, YARN-6050.002.patch, 
> YARN-6050.003.patch, YARN-6050.004.patch, YARN-6050.005.patch, 
> YARN-6050.006.patch, YARN-6050.007.patch, YARN-6050.008.patch, 
> YARN-6050.009.patch, YARN-6050.010.patch
>
>
> Yarn itself supports rack/node aware scheduling for AMs; however, there 
> currently are two problems:
> # To specify hard or soft rack/node requests, you have to specify more than 
> one {{ResourceRequest}}.  For example, if you want to schedule an AM only on 
> "rackA", you have to create two {{ResourceRequest}}, like this:
> {code}
> ResourceRequest.newInstance(PRIORITY, ANY, CAPABILITY, NUM_CONTAINERS, false);
> ResourceRequest.newInstance(PRIORITY, "rackA", CAPABILITY, NUM_CONTAINERS, 
> true);
> {code}
> The problem is that the Yarn API doesn't actually allow you to specify more 
> than one {{ResourceRequest}} in the {{ApplicationSubmissionContext}}.  The 
> current behavior is to either build one from {{getResource}} or directly from 
> {{getAMContainerResourceRequest}}, depending on if 
> {{getAMContainerResourceRequest}} is null or not.  We'll need to add a third 
> method, say {{getAMContainerResourceRequests}}, which takes a list of 
> {{ResourceRequest}} so that clients can specify the multiple resource 
> requests.
> # There are some places where things are hardcoded to overwrite what the 
> client specifies.  These are pretty straightforward to fix.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4051) ContainerKillEvent is lost when container is In New State and is recovering

2017-03-07 Thread sandflee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900345#comment-15900345
 ] 

sandflee commented on YARN-4051:


Since the RM will resend FINISH_APPS/FINISH_CONTAINER if the NM reports the 
app/container as running, it seems safe to drop the event while the container is 
recovering, [~jlowe].
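
A minimal sketch of that idea (the state check and names are assumptions, not 
necessarily how the patch wires it):
{code}
// If the container is still NEW while the NM is recovering, drop the kill
// event: once recovery finishes and the NM reports the app/container as
// running, the RM re-sends FINISH_APPS/FINISH_CONTAINER anyway.
if (recovering && container.getContainerState() == ContainerState.NEW) {
  LOG.info("Dropping kill event for recovering container "
      + container.getContainerId());
  return;
}
{code}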

> ContainerKillEvent is lost when container is  In New State and is recovering
> 
>
> Key: YARN-4051
> URL: https://issues.apache.org/jira/browse/YARN-4051
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: sandflee
>Assignee: sandflee
>Priority: Critical
> Attachments: YARN-4051.01.patch, YARN-4051.02.patch, 
> YARN-4051.03.patch, YARN-4051.04.patch, YARN-4051.05.patch, YARN-4051.06.patch
>
>
> As in YARN-4050, NM event dispatcher is blocked, and container is in New 
> state, when we finish application, the container still alive even after NM 
> event dispatcher is unblocked.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4236) Metric for aggregated resources allocation per queue

2017-03-07 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900408#comment-15900408
 ] 

Eric Badger commented on YARN-4236:
---

Hi, [~lichangleo]. The current patch has gone stale. Are you interested in 
rebasing it to trunk? If not, I can do the rebase.

> Metric for aggregated resources allocation per queue
> 
>
> Key: YARN-4236
> URL: https://issues.apache.org/jira/browse/YARN-4236
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: metrics, scheduler
>Reporter: Chang Li
>Assignee: Chang Li
>  Labels: oct16-medium
> Attachments: YARN-4236.2.patch, YARN-4236.patch
>
>
> We currently track allocated memory and allocated vcores per queue but we 
> don't have a good rate metric on how fast we're allocating these things. In 
> other words, a straight line in allocatedmb could equally be one extreme of 
> no new containers are being allocated or allocating a bunch of containers 
> where we free exactly what we allocate each time. Adding a resources 
> allocated per second per queue would give us a better insight into the rate 
> of resource churn on a queue. Based on this aggregated resource allocation 
> per queue we can easily have some tools to measure the rate of resource 
> allocation per queue.
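
A hedged sketch of the kind of per-queue counters this could add to QueueMetrics 
(metric and method names are assumptions, not the actual patch):
{code}
// Monotonic counters let external tools derive an allocation rate per queue
// from counter deltas, even when the allocated gauges stay flat due to churn.
@Metric("Aggregate memory allocated over time, in MB")
MutableCounterLong aggregateMemoryMBAllocated;
@Metric("Aggregate vcores allocated over time")
MutableCounterLong aggregateVcoresAllocated;

public void incrAggregateAllocated(Resource res, int containers) {
  aggregateMemoryMBAllocated.incr(res.getMemorySize() * containers);
  aggregateVcoresAllocated.incr((long) res.getVirtualCores() * containers);
}
{code}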



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6297) TestAppLogAggregatorImp.verifyFilesUploaded() should check # of files uploaded with that of files expected

2017-03-07 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6297:
-
Attachment: YARN-6297.01.patch

Uploaded a patch to address the issue and fix the newly uncovered bugs in the 
test code.

> TestAppLogAggregatorImp.verifyFilesUploaded() should check # of files 
> uploaded with that of files expected
> --
>
> Key: YARN-6297
> URL: https://issues.apache.org/jira/browse/YARN-6297
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>  Labels: test
> Attachments: YARN-6297.01.patch
>
>
> Per YARN-6252
> {code:java}
>   private static void verifyFilesUploaded(Set<String> filesUploaded,
>       Set<String> filesExpected) {
> final String errMsgPrefix = "The set of files uploaded are not the same " 
> +
> "as expected";
> if(filesUploaded.size() != filesUploaded.size()) {
>   fail(errMsgPrefix + ": actual size: " + filesUploaded.size() + " vs " +
>   "expected size: " + filesExpected.size());
> }
> for(String file: filesExpected) {
>   if(!filesUploaded.contains(file)) {
> fail(errMsgPrefix + ": expecting " + file);
>   }
> }
>   }
> {code}
> should check the number of files uploaded against the number of files 
> expected.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6299) FairSharePolicy is incorrect when demand is less than min share

2017-03-07 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-6299:
---
Summary: FairSharePolicy is incorrect when demand is less than min share  
(was: FairSharePolicy is off when demand is less than min share)

> FairSharePolicy is incorrect when demand is less than min share
> ---
>
> Key: YARN-6299
> URL: https://issues.apache.org/jira/browse/YARN-6299
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha2
>Reporter: Daniel Templeton
>
> {code}
>   Resource resourceUsage1 = s1.getResourceUsage();
>   Resource resourceUsage2 = s2.getResourceUsage();
>   Resource minShare1 = Resources.min(RESOURCE_CALCULATOR, null,
>   s1.getMinShare(), s1.getDemand());
>   Resource minShare2 = Resources.min(RESOURCE_CALCULATOR, null,
>   s2.getMinShare(), s2.getDemand());
>   boolean s1Needy = Resources.lessThan(RESOURCE_CALCULATOR, null,
>   resourceUsage1, minShare1);
>   boolean s2Needy = Resources.lessThan(RESOURCE_CALCULATOR, null,
>   resourceUsage2, minShare2);
>   minShareRatio1 = (double) resourceUsage1.getMemorySize()
>   / Resources.max(RESOURCE_CALCULATOR, null, minShare1, 
> ONE).getMemorySize();
>   minShareRatio2 = (double) resourceUsage2.getMemorySize()
>   / Resources.max(RESOURCE_CALCULATOR, null, minShare2, 
> ONE).getMemorySize();
> {code}
> If demand is less than min share, then an app will be flagged as needy if it 
> has demand that is higher than its usage, which happens any time the app has 
> been assigned resources that it hasn't started using yet.  That sounds wrong 
> to me.  [~kasha], [~yufeigu]?
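
A worked example of the concern, with hypothetical numbers:
{code}
// minShare = 10, demand = 4, usage = 2 (units arbitrary; numbers hypothetical)
// minShare' = min(minShare, demand) = min(10, 4) = 4
// needy     = usage < minShare'     = 2 < 4      = true
// The app is flagged needy solely because demand (4) exceeds usage (2),
// which happens whenever assigned containers haven't started running yet.
{code}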



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4236) Metric for aggregated resources allocation per queue

2017-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900413#comment-15900413
 ] 

Hadoop QA commented on YARN-4236:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} YARN-4236 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-4236 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12770399/YARN-4236.2.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15199/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Metric for aggregated resources allocation per queue
> 
>
> Key: YARN-4236
> URL: https://issues.apache.org/jira/browse/YARN-4236
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: metrics, scheduler
>Reporter: Chang Li
>Assignee: Chang Li
>  Labels: oct16-medium
> Attachments: YARN-4236.2.patch, YARN-4236.patch
>
>
> We currently track allocated memory and allocated vcores per queue but we 
> don't have a good rate metric on how fast we're allocating these things. In 
> other words, a straight line in allocatedmb could equally be one extreme of 
> no new containers are being allocated or allocating a bunch of containers 
> where we free exactly what we allocate each time. Adding a resources 
> allocated per second per queue would give us a better insight into the rate 
> of resource churn on a queue. Based on this aggregated resource allocation 
> per queue we can easily have some tools to measure the rate of resource 
> allocation per queue.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5881) Enable configuration of queue capacity in terms of absolute resources

2017-03-07 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900575#comment-15900575
 ] 

Tao Jie commented on YARN-5881:
---

Thank you [~leftnoteasy]. It seems that the queue-resource configuration would 
be similar to FairScheduler's with this feature. Is it possible that, with the 
same configuration file, we could choose either FS or CS for scheduling?

> Enable configuration of queue capacity in terms of absolute resources
> -
>
> Key: YARN-5881
> URL: https://issues.apache.org/jira/browse/YARN-5881
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Sean Po
>Assignee: Wangda Tan
> Attachments: 
> YARN-5881.Support.Absolute.Min.Max.Resource.In.Capacity.Scheduler.design-doc.v1.pdf
>
>
> Currently, Yarn RM supports the configuration of queue capacity in terms of a 
> proportion to cluster capacity. In the context of Yarn being used as a public 
> cloud service, it makes more sense if queues can be configured absolutely. 
> This will allow administrators to set usage limits more concretely and 
> simplify customer expectations for cluster allocation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4051) ContainerKillEvent is lost when container is In New State and is recovering

2017-03-07 Thread sandflee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sandflee updated YARN-4051:
---
Attachment: (was: YARN-4051.06.patch)

> ContainerKillEvent is lost when container is  In New State and is recovering
> 
>
> Key: YARN-4051
> URL: https://issues.apache.org/jira/browse/YARN-4051
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: sandflee
>Assignee: sandflee
>Priority: Critical
> Attachments: YARN-4051.01.patch, YARN-4051.02.patch, 
> YARN-4051.03.patch, YARN-4051.04.patch, YARN-4051.05.patch
>
>
> As in YARN-4050, NM event dispatcher is blocked, and container is in New 
> state, when we finish application, the container still alive even after NM 
> event dispatcher is unblocked.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4051) ContainerKillEvent is lost when container is In New State and is recovering

2017-03-07 Thread sandflee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sandflee updated YARN-4051:
---
Attachment: YARN-4051.06.patch

> ContainerKillEvent is lost when container is  In New State and is recovering
> 
>
> Key: YARN-4051
> URL: https://issues.apache.org/jira/browse/YARN-4051
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: sandflee
>Assignee: sandflee
>Priority: Critical
> Attachments: YARN-4051.01.patch, YARN-4051.02.patch, 
> YARN-4051.03.patch, YARN-4051.04.patch, YARN-4051.05.patch, YARN-4051.06.patch
>
>
> As in YARN-4050, NM event dispatcher is blocked, and container is in New 
> state, when we finish application, the container still alive even after NM 
> event dispatcher is unblocked.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6289) Fail to achieve data locality when running MapReduce and Spark on HDFS

2017-03-07 Thread Huangkaixuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huangkaixuan updated YARN-6289:
---
Component/s: (was: capacity scheduler)
 distributed-scheduling

> Fail to achieve data locality when running MapReduce and Spark on HDFS
> -
>
> Key: YARN-6289
> URL: https://issues.apache.org/jira/browse/YARN-6289
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: distributed-scheduling
> Environment: Hardware configuration
> CPU: 2 x Intel(R) Xeon(R) E5-2620 v2 @ 2.10GHz /15M Cache 6-Core 12-Thread 
> Memory: 128GB Memory (16x8GB) 1600MHz
> Disk: 600GBx2 3.5-inch with RAID-1
> Network bandwidth: 968Mb/s
> Software configuration
> Spark-1.6.2   Hadoop-2.7.1 
>Reporter: Huangkaixuan
> Attachments: Hadoop_Spark_Conf.zip, YARN-DataLocality.docx
>
>
> When running a simple wordcount experiment on YARN, I noticed that the task 
> failed to achieve data locality, even though there is no other job running on 
> the cluster at the same time. The experiment was done in a 7-node (1 master, 
> 6 data nodes/node managers) cluster and the input of the wordcount job (both 
> Spark and MapReduce) is a single-block file in HDFS which is two-way 
> replicated (replication factor = 2). I ran wordcount on YARN for 10 times. 
> The results show that only 30% of tasks can achieve data locality, which 
> seems like the result of a random placement of tasks. The experiment details 
> are in the attachment, and feel free to reproduce the experiments.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6164) Expose maximum-am-resource-percent in YarnClient

2017-03-07 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900609#comment-15900609
 ] 

Sunil G commented on YARN-6164:
---

Hi [~benson.qiu]

I discussed this offline with [~leftnoteasy] yesterday, and we feel that we can 
have a class named {{QueueCapacities}} in {{QueueInfo}}. It could have fields 
like cap/max-cap/max-am-perc (all per-label based), etc. 
Could you please share your thoughts as well?
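
A rough sketch of such a class, with the field set taken from the comment above 
(all names tentative):
{code}
// Per-label capacity values exposed alongside QueueInfo; names are tentative.
public class QueueCapacities {
  private final Map<String, Float> capacity = new HashMap<>();
  private final Map<String, Float> maxCapacity = new HashMap<>();
  private final Map<String, Float> maxAMResourcePercentage = new HashMap<>();

  public float getCapacity(String label) {
    return capacity.getOrDefault(label, 0f);
  }

  public float getMaxAMResourcePercentage(String label) {
    return maxAMResourcePercentage.getOrDefault(label, 0f);
  }
}
{code}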

> Expose maximum-am-resource-percent in YarnClient
> 
>
> Key: YARN-6164
> URL: https://issues.apache.org/jira/browse/YARN-6164
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Benson Qiu
>Assignee: Benson Qiu
> Attachments: YARN-6164.001.patch, YARN-6164.002.patch, 
> YARN-6164.003.patch, YARN-6164.004.patch, YARN-6164.005.patch
>
>
> `yarn.scheduler.capacity.maximum-am-resource-percent` is exposed through the 
> [Cluster Scheduler 
> API|http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Scheduler_API],
>  but not through 
> [YarnClient|https://hadoop.apache.org/docs/current/api/org/apache/hadoop/yarn/client/api/YarnClient.html].
> Since YarnClient and RM REST APIs depend on different ports (8032 vs 8088 by 
> default), it would be nice to expose `maximum-am-resource-percent` in 
> YarnClient as well. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4236) Metric for aggregated resources allocation per queue

2017-03-07 Thread Chang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900663#comment-15900663
 ] 

Chang Li commented on YARN-4236:


Hey [~ebadger], I am interested in updating this patch, but I probably need to 
wait till the weekend to work on it. Hope that's OK.

> Metric for aggregated resources allocation per queue
> 
>
> Key: YARN-4236
> URL: https://issues.apache.org/jira/browse/YARN-4236
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: metrics, scheduler
>Reporter: Chang Li
>Assignee: Chang Li
>  Labels: oct16-medium
> Attachments: YARN-4236.2.patch, YARN-4236.patch
>
>
> We currently track allocated memory and allocated vcores per queue but we 
> don't have a good rate metric on how fast we're allocating these things. In 
> other words, a straight line in allocatedmb could equally be one extreme of 
> no new containers are being allocated or allocating a bunch of containers 
> where we free exactly what we allocate each time. Adding a resources 
> allocated per second per queue would give us a better insight into the rate 
> of resource churn on a queue. Based on this aggregated resource allocation 
> per queue we can easily have some tools to measure the rate of resource 
> allocation per queue.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6050) AMs can't be scheduled on racks or nodes

2017-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900702#comment-15900702
 ] 

Hadoop QA commented on YARN-6050:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 17 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
2s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
40s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 29s{color} | {color:orange} root: The patch generated 9 new + 1928 unchanged 
- 5 fixed = 1937 total (was 1933) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 895 unchanged - 2 fixed = 895 total (was 897) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
46s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 46s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}101m 
25s{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
20s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | 

[jira] [Commented] (YARN-5948) Implement MutableConfigurationManager for handling storage into configuration store

2017-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900711#comment-15900711
 ] 

Hadoop QA commented on YARN-5948:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
53s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
53s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
15s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
31s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} YARN-5734 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 59s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 327 unchanged - 0 fixed = 330 total (was 327) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
38s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 36s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5948 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12856729/YARN-5948-YARN-5734.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux c77e64576350 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5734 / 01ea2f3 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Updated] (YARN-4236) Metric for aggregated resources allocation per queue

2017-03-07 Thread Chang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chang Li updated YARN-4236:
---
Attachment: YARN-4236-3.patch

updated patch :)

> Metric for aggregated resources allocation per queue
> 
>
> Key: YARN-4236
> URL: https://issues.apache.org/jira/browse/YARN-4236
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: metrics, scheduler
>Reporter: Chang Li
>Assignee: Chang Li
>  Labels: oct16-medium
> Attachments: YARN-4236.2.patch, YARN-4236-3.patch, YARN-4236.patch
>
>
> We currently track allocated memory and allocated vcores per queue but we 
> don't have a good rate metric on how fast we're allocating these things. In 
> other words, a straight line in allocatedmb could equally be one extreme of 
> no new containers are being allocated or allocating a bunch of containers 
> where we free exactly what we allocate each time. Adding a resources 
> allocated per second per queue would give us a better insight into the rate 
> of resource churn on a queue. Based on this aggregated resource allocation 
> per queue we can easily have some tools to measure the rate of resource 
> allocation per queue.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6304) Skip rm.transitionToActive call to RM if RM is already active.

2017-03-07 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-6304:

Attachment: YARN-6304.0001.patch

> Skip rm.transitionToActive call to RM if RM is already active. 
> ---
>
> Key: YARN-6304
> URL: https://issues.apache.org/jira/browse/YARN-6304
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
> Attachments: YARN-6304.0001.patch
>
>
> When the elector elects the RM to become active, the AdminService refreshes 
> the following configurations even though the RM is already in the ACTIVE state:
> # refreshAdminAcls
> # refreshAll to update the configurations.
> I think we can skip refreshing configurations on an ACTIVE RM. The admin can 
> still run the refresh commands separately, which will report any failure 
> directly.
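
A minimal sketch of the proposed short-circuit (how AdminService reads the HA 
state and the surrounding method names are assumptions, not the actual patch):
{code}
// Skip the transition work when the RM is already active; an admin who
// wants a refresh can still run the refresh commands explicitly and will
// see any failure directly.
if (rm.getRMContext().getHAServiceState() == HAServiceState.ACTIVE) {
  LOG.info("RM is already active, skipping transitionToActive and refreshAll");
  return;
}
rm.transitionToActive();
refreshAll();
{code}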



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6304) Skip rm.transitionToActive call to RM if RM is already active.

2017-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900830#comment-15900830
 ] 

Hadoop QA commented on YARN-6304:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 11s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6304 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12856751/YARN-6304.0001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 41d35b0b2d8d 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 28daaf0 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15205/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15205/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15205/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Skip rm.transitionToActive call to RM if RM is already active. 
> 

[jira] [Created] (YARN-6304) Skip rm.transitionToActive call to RM if RM is already active.

2017-03-07 Thread Rohith Sharma K S (JIRA)
Rohith Sharma K S created YARN-6304:
---

 Summary: Skip rm.transitionToActive call to RM if RM is already 
active. 
 Key: YARN-6304
 URL: https://issues.apache.org/jira/browse/YARN-6304
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Rohith Sharma K S


When the elector elects the RM to become active, AdminService refreshes the 
following configurations even though the RM is already in the ACTIVE state:
# refreshAdminAcls
# refreshAll to update the configurations.

I think we can skip refreshing configurations on an already-ACTIVE RM. The 
admin can still execute the refresh commands separately, which will report 
any failure to him.
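
A sketch of the proposed short-circuit, assuming AdminService can consult the 
RM's HA state via RMContext; names and the exact signature may differ from the 
attached patch:

{code:java}
// Sketch, inside AdminService; rm and LOG are existing fields there, and
// HAServiceState is org.apache.hadoop.ha.HAServiceProtocol.HAServiceState.
public synchronized void transitionToActive(
    HAServiceProtocol.StateChangeRequestInfo reqInfo) throws IOException {
  if (rm.getRMContext().getHAServiceState() == HAServiceState.ACTIVE) {
    // Re-notification from the elector; nothing needs refreshing.
    LOG.info("RM is already in ACTIVE state, skipping transition and refresh");
    return;
  }
  // ... existing logic: check access, do the transition, refreshAdminAcls(),
  // refreshAll(), etc. ...
}
{code}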




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4236) Metric for aggregated resources allocation per queue

2017-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900816#comment-15900816
 ] 

Hadoop QA commented on YARN-4236:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 25 new + 114 unchanged - 21 fixed = 139 total (was 135) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 40m  
6s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-4236 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12856748/YARN-4236-3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 19058713538f 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 28daaf0 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15204/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15204/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15204/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Metric for aggregated resources allocation per queue
> 
>
> Key: YARN-4236
> 

[jira] [Commented] (YARN-6223) [Umbrella] Natively support GPU configuration/discovery/scheduling/isolation on YARN

2017-03-07 Thread Ying Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900877#comment-15900877
 ] 

Ying Zhang commented on YARN-6223:
--

Hi [~wangda], we are interested in this and would like to contribute. Please 
let us know how we can get involved. :-)

> [Umbrella] Natively support GPU configuration/discovery/scheduling/isolation 
> on YARN
> 
>
> Key: YARN-6223
> URL: https://issues.apache.org/jira/browse/YARN-6223
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>
> A variety of workloads are moving to YARN, including machine learning / deep 
> learning workloads that can be sped up by leveraging GPU computation power. 
> Workloads should be able to request GPUs from YARN as simply as CPU and memory.
> *To make a complete GPU story, we should support the following pieces:*
> 1) GPU discovery/configuration: the admin can either configure GPU resources 
> and architectures on each node, or, more advanced, the NodeManager can 
> automatically discover GPU resources and architectures and report them to the 
> ResourceManager.
> 2) GPU scheduling: the YARN scheduler should account for GPU as a resource 
> type just like CPU and memory.
> 3) GPU isolation/monitoring: once a task is launched with GPU resources, the 
> NodeManager should properly isolate and monitor the task's resource usage.
> For #2, YARN-3926 can support it natively. For #3, YARN-3611 has introduced 
> an extensible framework to support isolation for different resource types and 
> different runtimes.
> *Related JIRAs:*
> There are a couple of JIRAs (YARN-4122/YARN-5517) filed with similar goals 
> but different solutions:
> For scheduling:
> - YARN-4122/YARN-5517 both add a new GPU resource type to the Resource 
> protocol instead of leveraging YARN-3926.
> For isolation:
> - YARN-4122 proposed using CGroups for isolation, which cannot solve the 
> problems listed at 
> https://github.com/NVIDIA/nvidia-docker/wiki/GPU-isolation#challenges, such 
> as minor device number mapping, loading the nvidia_uvm module, and mismatched 
> CUDA/driver versions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6304) Skip rm.transitionToActive call to RM if RM is already active.

2017-03-07 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900794#comment-15900794
 ] 

Rohith Sharma K S edited comment on YARN-6304 at 3/8/17 6:46 AM:
-

cc :-/ [~kasha] [~xgong] [~jianhe] do you see any issue in returning from 
AdminService?


was (Author: rohithsharma):
cc :-/ [~kasha] [~xgong] [~jianhe] do you see any issue with it?

> Skip rm.transitionToActive call to RM if RM is already active. 
> ---
>
> Key: YARN-6304
> URL: https://issues.apache.org/jira/browse/YARN-6304
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-6304.0001.patch
>
>
> When the elector elects the RM to become active, AdminService refreshes the 
> following configurations even though the RM is already in the ACTIVE state:
> # refreshAdminAcls
> # refreshAll to update the configurations.
> I think we can skip refreshing configurations on an already-ACTIVE RM. The 
> admin can still execute the refresh commands separately, which will report 
> any failure to him.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6304) Skip rm.transitionToActive call to RM if RM is already active.

2017-03-07 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S reassigned YARN-6304:
---

Assignee: Rohith Sharma K S

> Skip rm.transitionToActive call to RM if RM is already active. 
> ---
>
> Key: YARN-6304
> URL: https://issues.apache.org/jira/browse/YARN-6304
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-6304.0001.patch
>
>
> When the elector elects the RM to become active, AdminService refreshes the 
> following configurations even though the RM is already in the ACTIVE state:
> # refreshAdminAcls
> # refreshAll to update the configurations.
> I think we can skip refreshing configurations on an already-ACTIVE RM. The 
> admin can still execute the refresh commands separately, which will report 
> any failure to him.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6207) Move application across queues should handle delayed event processing

2017-03-07 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6207:
--
Summary: Move application across queues should handle delayed event 
processing  (was: Move application can  fail when attempt add event is delayed)

> Move application across queues should handle delayed event processing
> -
>
> Key: YARN-6207
> URL: https://issues.apache.org/jira/browse/YARN-6207
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-6207.001.patch, YARN-6207.002.patch, 
> YARN-6207.003.patch, YARN-6207.004.patch, YARN-6207.005.patch, 
> YARN-6207.006.patch, YARN-6207.007.patch, YARN-6207.008.patch
>
>
> *Steps to reproduce*
> 1. Submit an application and delay the attempt-add event to the scheduler 
> (simulate using a debugger at EventDispatcher for SchedulerEventDispatcher).
> 2. Call move application to the destination queue.
> {noformat}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.preValidateMoveApplication(CapacityScheduler.java:2086)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.moveApplicationAcrossQueue(RMAppManager.java:669)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.moveApplicationAcrossQueues(ClientRMService.java:1231)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.moveApplicationAcrossQueues(ApplicationClientProtocolPBServiceImpl.java:388)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:537)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:522)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:867)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:813)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1892)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2659)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1483)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1429)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1339)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:115)
>   at com.sun.proxy.$Proxy7.moveApplicationAcrossQueues(Unknown Source)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.moveApplicationAcrossQueues(ApplicationClientProtocolPBClientImpl.java:398)
>   ... 16 more
> {noformat}
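
A sketch of the kind of guard that would turn the NPE into a clear client-side 
error; the attached patches may handle additional cases:

{code:java}
// Sketch, inside CapacityScheduler#preValidateMoveApplication; 'applications'
// is the scheduler's ApplicationId -> SchedulerApplication registry. If the
// attempt-add event is still queued, fail fast instead of dereferencing null.
SchedulerApplication<FiCaSchedulerApp> application = applications.get(appId);
if (application == null) {
  throw new YarnException("App to be moved " + appId
      + " not found in scheduler; its attempt may not be added yet.");
}
{code}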



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6304) Skip rm.transitionToActive call to RM if RM is already active.

2017-03-07 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900794#comment-15900794
 ] 

Rohith Sharma K S commented on YARN-6304:
-

cc :-/ [~kasha] [~xgong] [~jianhe] do you see any issue with it?

> Skip rm.transitionToActive call to RM if RM is already active. 
> ---
>
> Key: YARN-6304
> URL: https://issues.apache.org/jira/browse/YARN-6304
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-6304.0001.patch
>
>
> When the elector elects the RM to become active, AdminService refreshes the 
> following configurations even though the RM is already in the ACTIVE state:
> # refreshAdminAcls
> # refreshAll to update the configurations.
> I think we can skip refreshing configurations on an already-ACTIVE RM. The 
> admin can still execute the refresh commands separately, which will report 
> any failure to him.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6275) Fail to show real-time tracking charts in SLS

2017-03-07 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900249#comment-15900249
 ] 

Yufei Gu commented on YARN-6275:


Thanks [~rkanter] for the review and commit.

> Fail to show real-time tracking charts in SLS
> -
>
> Key: YARN-6275
> URL: https://issues.apache.org/jira/browse/YARN-6275
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6275.001.patch, YARN-6275.002.patch, 
> YARN-6275.003.patch, YARN-6275.004.patch
>
>
> # Not put {{html}} directory under the current working directory.
> # There is a bug in Class {{SLSWebApp}}, here is the stack trace:
> {code}
> java.lang.NullPointerException
>   at 
> org.eclipse.jetty.server.handler.ResourceHandler.handle(ResourceHandler.java:499)
>   at org.apache.hadoop.yarn.sls.web.SLSWebApp$1.handle(SLSWebApp.java:152)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:524)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
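
The NPE originates in Jetty's ResourceHandler when it has no resource base to 
serve from; a sketch of the kind of guard that avoids it (setup and names are 
illustrative, not the committed patch):

{code:java}
// Resolve the 'html' directory from the classpath and fail clearly if it is
// missing, instead of letting Jetty's ResourceHandler NPE at request time.
java.net.URL htmlDir = SLSWebApp.class.getClassLoader().getResource("html");
if (htmlDir == null) {
  throw new IllegalStateException(
      "SLS 'html' directory not found on the classpath");
}
org.eclipse.jetty.server.handler.ResourceHandler staticHandler =
    new org.eclipse.jetty.server.handler.ResourceHandler();
staticHandler.setResourceBase(htmlDir.toExternalForm());
{code}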



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4266) Allow whitelisted users to disable user re-mapping/squashing when launching docker containers

2017-03-07 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900256#comment-15900256
 ] 

Sidharta Seethana commented on YARN-4266:
-

Based on the discussions in this JIRA and on YARN-5360, it looks like all we 
have are less-than-ideal choices. As I mentioned on YARN-5360, using the uid 
has readability issues, and it still wouldn't guarantee that an image would work 
correctly. In my opinion, we shouldn't be adding *more* requirements on images 
- the whole objective of this JIRA was to remove a requirement where possible 
({{--user}}). launch_container.sh already uses bash, ln, cp, chmod, ls, find. 
To this list we are considering adding usermod, su, getent and so on. On top of 
that, we are considering making (expensive) changes to a container prior to 
launching the application process - usermod only changes the files in a user's 
home directory, and even then we have no way of predicting how long that 
operation would take - making application (process) launch time unpredictable. 
IMO, this is not the direction we should be going in.

In the interest of making some progress, perhaps we could add support for 
optionally using {{--user=<uid>:<gid>}} (turned off by default). A subset of 
images that wouldn't otherwise work would become usable because of this change 
- for example, images that don't have the specified user (say foo) but 
would otherwise work with an arbitrary user (i.e. the values supplied in 
{{--user=<uid>:<gid>}} don't matter). 

I might have said this on other JIRAs and I'll repeat here: docker containers 
and applications using them are just one category of workloads that are going 
to be run on a production YARN cluster. While we would like to use as much of 
the power and flexibility that docker provides, we have to do this with due 
consideration given to existing YARN/hadoop paradigms - security model 
(users/groups/permissions), localization, log aggregation and so on. 

> Allow whitelisted users to disable user re-mapping/squashing when launching 
> docker containers
> -
>
> Key: YARN-4266
> URL: https://issues.apache.org/jira/browse/YARN-4266
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Zhankun Tang
> Attachments: YARN-4266.001.patch, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v2.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v3.pdf, 
> YARN-4266-branch-2.8.001.patch
>
>
> Docker provides a mechanism (the --user switch) that enables us to specify 
> the user the container processes should run as. We use this mechanism today 
> when launching docker containers. In non-secure mode, we run the docker 
> container based on 
> `yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user` and in 
> secure mode, as the submitting user. However, this mechanism breaks down with 
> a large number of 'pre-created' images which don't necessarily have the users 
> available within the image. Examples of such images include shared images 
> that need to be used by multiple users. We need a way in which we can allow a 
> pre-defined set of users to run containers based on existing images, without 
> using the --user switch. There are some implications of disabling this user 
> squashing that we'll need to work through: log aggregation, artifact 
> deletion, etc.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-3471) Fix timeline client retry

2017-03-07 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899889#comment-15899889
 ] 

Haibo Chen edited comment on YARN-3471 at 3/7/17 6:24 PM:
--

Both 1) and 2) have been resolved after YARN-4675. 
(TimelineClientConnectionRetry is called in a Jersey client filter that only 
retries requests upon connection-related exceptions) Closing this. Feel free to 
reopen it if you disagree.


was (Author: haibochen):
Both 1) and 2) have been resolved after YARN-4675. 
(TimelineClientConnectionRetry is called in a Jersey client filter that only 
retries requests upon connection-related exceptions) Closing this.

> Fix timeline client retry
> -
>
> Key: YARN-3471
> URL: https://issues.apache.org/jira/browse/YARN-3471
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 2.8.0
>Reporter: Zhijie Shen
>Assignee: Haibo Chen
>  Labels: YARN-5355
> Attachments: YARN-3471.1.patch, YARN-3471.2.patch
>
>
> I found that the client retry has some problems:
> 1. The new put methods will retry on all exceptions, but they should only do 
> it upon ConnectException.
> 2. We can reuse TimelineClientConnectionRetry to simplify the retry logic.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-3471) Fix timeline client retry

2017-03-07 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen resolved YARN-3471.
--
Resolution: Not A Problem

> Fix timeline client retry
> -
>
> Key: YARN-3471
> URL: https://issues.apache.org/jira/browse/YARN-3471
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 2.8.0
>Reporter: Zhijie Shen
>Assignee: Haibo Chen
>  Labels: YARN-5355
> Attachments: YARN-3471.1.patch, YARN-3471.2.patch
>
>
> I found that the client retry has some problems:
> 1. The new put methods will retry on all exceptions, but they should only do 
> it upon ConnectException.
> 2. We can reuse TimelineClientConnectionRetry to simplify the retry logic.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6296) ReservationId.compareTo ignores id when clustertimestamp is the same

2017-03-07 Thread Haibo Chen (JIRA)
Haibo Chen created YARN-6296:


 Summary: ReservationId.compareTo ignores id when clustertimestamp 
is the same
 Key: YARN-6296
 URL: https://issues.apache.org/jira/browse/YARN-6296
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0-alpha2
Reporter: Haibo Chen


Per YARN-6252
{code:java}
  public int compareTo(ReservationId other) {
if (this.getClusterTimestamp() - other.getClusterTimestamp() == 0) {
  return getId() > getId() ? 1 : getId() < getId() ? -1 : 0;
} else {
}
  }
{code}
compares id with itself. It should be
return this.getId() > other.getId() ? 1 : (this.getId() < other.getId() ? -1 : 
0);
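
For completeness, a hypothetical fully corrected method; the else branch is 
reconstructed on the assumption that it orders by cluster timestamp first (the 
snippet above elides its body):

{code:java}
@Override
public int compareTo(ReservationId other) {
  if (this.getClusterTimestamp() == other.getClusterTimestamp()) {
    return Long.compare(this.getId(), other.getId());
  } else {
    return Long.compare(this.getClusterTimestamp(),
        other.getClusterTimestamp());
  }
}
{code}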





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6050) AMs can't be scheduled on racks or nodes

2017-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900270#comment-15900270
 ] 

Hadoop QA commented on YARN-6050:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 17 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
54s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  9m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 13s{color} | {color:orange} root: The patch generated 11 new + 1926 
unchanged - 5 fixed = 1937 total (was 1931) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
38s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 2 new + 897 unchanged - 0 fixed = 899 total (was 897) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
35s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 
52s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}106m 
38s{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
30s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 1s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}242m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6050 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12856649/YARN-6050.009.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 0096af72d9b0 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 

[jira] [Commented] (YARN-6287) RMCriticalThreadUncaughtExceptionHandler.rmContext should be final

2017-03-07 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1598#comment-1598
 ] 

Daniel Templeton commented on YARN-6287:


Thanks for the patch, [~ctrentbarker].  LGTM.  +1  I'll commit it when I get a 
chance.

> RMCriticalThreadUncaughtExceptionHandler.rmContext should be final
> --
>
> Key: YARN-6287
> URL: https://issues.apache.org/jira/browse/YARN-6287
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha2
>Reporter: Daniel Templeton
>Assignee: Corey Barker
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6287.001.patch
>
>
> {code}
>   private RMContext rmContext;
> {code}
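
For reference, the one-line change being requested, assuming the field is only 
assigned in the constructor (which is what makes {{final}} safe here):

{code:java}
private final RMContext rmContext;
{code}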



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6299) FairSharePolicy is off when demand is less than min share

2017-03-07 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-6299:
--

 Summary: FairSharePolicy is off when demand is less than min share
 Key: YARN-6299
 URL: https://issues.apache.org/jira/browse/YARN-6299
 Project: Hadoop YARN
  Issue Type: Bug
  Components: fairscheduler
Affects Versions: 3.0.0-alpha2
Reporter: Daniel Templeton


{code}
  Resource resourceUsage1 = s1.getResourceUsage();
  Resource resourceUsage2 = s2.getResourceUsage();
  Resource minShare1 = Resources.min(RESOURCE_CALCULATOR, null,
      s1.getMinShare(), s1.getDemand());
  Resource minShare2 = Resources.min(RESOURCE_CALCULATOR, null,
      s2.getMinShare(), s2.getDemand());
  boolean s1Needy = Resources.lessThan(RESOURCE_CALCULATOR, null,
      resourceUsage1, minShare1);
  boolean s2Needy = Resources.lessThan(RESOURCE_CALCULATOR, null,
      resourceUsage2, minShare2);
  minShareRatio1 = (double) resourceUsage1.getMemorySize()
      / Resources.max(RESOURCE_CALCULATOR, null, minShare1, ONE).getMemorySize();
  minShareRatio2 = (double) resourceUsage2.getMemorySize()
      / Resources.max(RESOURCE_CALCULATOR, null, minShare2, ONE).getMemorySize();
{code}

If demand is less than min share, then an app will be flagged as needy if it 
has demand that is higher than its usage, which happens any time the app has 
been assigned resources that it hasn't started using yet.  That sounds wrong to 
me.  [~kasha], [~yufeigu]?
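
A small, self-contained illustration of that concern with hypothetical numbers 
(MB-only, mirroring the memory-based ratio above):

{code:java}
public class NeedyFlagDemo {
  public static void main(String[] args) {
    long minShare = 8192, demand = 4096, usage = 3072; // MB, hypothetical
    // FairSharePolicy clamps min share by demand...
    long effectiveMinShare = Math.min(minShare, demand); // 4096
    // ...and flags the schedulable as needy when usage < the clamped value:
    boolean needy = usage < effectiveMinShare; // true
    // So the app counts as "needy" merely because it has outstanding asks
    // (demand > usage), even though its demand is below its min share.
    System.out.println("needy = " + needy);
  }
}
{code}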



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6298) Metric preemptCall is not used in new preemption.

2017-03-07 Thread Yufei Gu (JIRA)
Yufei Gu created YARN-6298:
--

 Summary: Metric preemptCall is not used in new preemption.
 Key: YARN-6298
 URL: https://issues.apache.org/jira/browse/YARN-6298
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: fairscheduler
Affects Versions: 3.0.0-alpha2, 2.8.0
Reporter: Yufei Gu
Assignee: Yufei Gu


Either get rid of it in Hadoop 3 or use it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6294) ATS client should better handle Socket closed case

2017-03-07 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-6294:

Attachment: YARN-6294-trunk.001.patch
YARN-6294-branch-2.001.patch

Since trunk and branch-2 diverge on TimelineClientImpl, I've created two 
patches. We probably want to focus our review effort on the trunk one, and 
then, before commit, finalize all changes and apply them to branch-2. 

> ATS client should better handle Socket closed case
> --
>
> Key: YARN-6294
> URL: https://issues.apache.org/jira/browse/YARN-6294
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineclient
>Reporter: Sumana Sathish
>Assignee: Li Lu
> Attachments: YARN-6294-branch-2.001.patch, YARN-6294-trunk.001.patch
>
>
> Exception stack:
> {noformat}
> 17/02/06 07:11:30 INFO distributedshell.ApplicationMaster: Container 
> completed successfully., containerId=container_1486362713048_0037_01_02
> 17/02/06 07:11:30 ERROR distributedshell.ApplicationMaster: Error in 
> RMCallbackHandler: 
> com.sun.jersey.api.client.ClientHandlerException: java.net.SocketException: 
> Socket closed
>   at 
> com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter$1.run(TimelineClientImpl.java:236)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineClientConnectionRetry.retryOn(TimelineClientImpl.java:185)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter.handle(TimelineClientImpl.java:248)
>   at com.sun.jersey.api.client.Client.handle(Client.java:648)
>   at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
>   at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
>   at 
> com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:563)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter.doPostingObject(TimelineWriter.java:154)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter$1.run(TimelineWriter.java:115)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter$1.run(TimelineWriter.java:112)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1833)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter.doPosting(TimelineWriter.java:112)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter.putEntities(TimelineWriter.java:92)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:346)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerEndEvent(ApplicationMaster.java:1145)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$400(ApplicationMaster.java:169)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$RMCallbackHandler.onContainersCompleted(ApplicationMaster.java:779)
>   at 
> org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$CallbackHandlerThread.run(AMRMClientAsyncImpl.java:296)
> Caused by: java.net.SocketException: Socket closed
>   at java.net.SocketInputStream.read(SocketInputStream.java:204)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
>   at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704)
>   at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1569)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)
>   at 
> java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
>   at 
> com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:240)
>   at 
> com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:147)
>   ... 20 more
> Exception in thread "AMRM Callback Handler Thread" 
> {noformat}
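
One plausible direction, as a sketch only (not necessarily what the attached 
patches do): broaden the timeline client's retry predicate so a 
{{java.net.SocketException}} raised mid-request is treated as a retriable 
connection error, the same way {{ConnectException}} already is:

{code:java}
import java.net.ConnectException;
import java.net.SocketException;

final class TimelineRetryPredicate {
  // True when the failure looks like a connection-level problem worth
  // retrying, rather than an application-level error.
  static boolean isRetriableConnectionFailure(Throwable t) {
    while (t != null) {
      if (t instanceof ConnectException || t instanceof SocketException) {
        return true;
      }
      t = t.getCause(); // Jersey wraps IO errors in ClientHandlerException
    }
    return false;
  }
}
{code}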



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org

[jira] [Resolved] (YARN-6252) Suspicious code fragments: comparing with itself

2017-03-07 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen resolved YARN-6252.
--
Resolution: Duplicate

> Suspicious code fragments: comparing with itself
> 
>
> Key: YARN-6252
> URL: https://issues.apache.org/jira/browse/YARN-6252
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: AppChecker
>Assignee: Haibo Chen
>
> Hi
> 1) 
> https://github.com/apache/hadoop/blob/235203dffda1482fb38762fde544c4dd9c3e1fa8/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ReservationId.java#L106
> {code:java}
>   return getId() > getId() ? 1 : getId() < getId() ? -1 : 0;
> {code}
> strangely, getId() is compared with itself.
> It should probably be something like this:
> {code:java}
>   return this.getId() > other.getId() ? 1 : this.getId() < other.getId() 
> ? -1 : 0;
> {code}
> 2) 
> https://github.com/apache/hadoop/blob/235203dffda1482fb38762fde544c4dd9c3e1fa8/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestAppLogAggregatorImpl.java#L260
> {code:java}
> if(filesUploaded.size() != filesUploaded.size()) {
>   fail(errMsgPrefix + ": actual size: " + filesUploaded.size() + " vs " +
>   "expected size: " + filesExpected.size());
> }
> {code}
> filesUploaded.size() is compared with itself.
> It should probably be:
> {code:java}
> if(filesUploaded.size() != filesExpected.size()) {
>   fail(errMsgPrefix + ": actual size: " + filesUploaded.size() + " vs " +
>   "expected size: " + filesExpected.size());
> }
> {code}
> These possible defects were found by the [static code analyzer 
> AppChecker|https://cnpo.ru/en/solutions/appchecker.php].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6294) ATS client should better handle Socket closed case

2017-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900042#comment-15900042
 ] 

Hadoop QA commented on YARN-6294:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
30s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6294 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12856669/YARN-6294-trunk.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fc28014ccbfe 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 959940b |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15195/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15195/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ATS client should better handle Socket closed case
> --
>
> Key: YARN-6294
> URL: https://issues.apache.org/jira/browse/YARN-6294
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineclient
>Reporter: Sumana Sathish
>Assignee: Li Lu
> Attachments: YARN-6294-branch-2.001.patch, YARN-6294-trunk.001.patch
>
>
> Exception 

[jira] [Commented] (YARN-3471) Fix timeline client retry

2017-03-07 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899889#comment-15899889
 ] 

Haibo Chen commented on YARN-3471:
--

Both 1) and 2) have been resolved after YARN-4675. 
(TimelineClientConnectionRetry is called in a Jersey client filter that only 
retries requests upon connection-related exceptions) Closing this.

> Fix timeline client retry
> -
>
> Key: YARN-3471
> URL: https://issues.apache.org/jira/browse/YARN-3471
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 2.8.0
>Reporter: Zhijie Shen
>Assignee: Haibo Chen
>  Labels: YARN-5355
> Attachments: YARN-3471.1.patch, YARN-3471.2.patch
>
>
> I found that the client retry has some problems:
> 1. The new put methods will retry on all exception, but they should only do 
> it upon ConnectException.
> 2. We can reuse TimelineClientConnectionRetry to simplify the retry logic.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6281) Cleanup when AMRMProxy fails to initialize a new interceptor chain

2017-03-07 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-6281:
---
Attachment: YARN-6281.v4.patch

> Cleanup when AMRMProxy fails to initialize a new interceptor chain
> --
>
> Key: YARN-6281
> URL: https://issues.apache.org/jira/browse/YARN-6281
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-6281.v1.patch, YARN-6281.v2.patch, 
> YARN-6281.v3.patch, YARN-6281.v4.patch
>
>
> When an app starts, AMRMProxy.initializePipeline creates a new interceptor 
> chain and adds it to its pipeline mapping, then initializes the chain and 
> returns. The problem is that when the chain initialization throws (e.g. 
> because of a configuration error, an interceptor class not found, etc.), the 
> chain is not removed from AMRMProxy's pipeline mapping. 
> This patch also contains misc log message fixes in AMRMProxy. 
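
The cleanup idea as a sketch, with hypothetical names for the pipeline map and 
chain wrapper (the actual patch may structure this differently):

{code:java}
// 'pipelines' is the app -> interceptor-chain map, 'chainWrapper' the
// freshly created chain.
pipelines.put(appId, chainWrapper);
try {
  chainWrapper.init(conf); // may throw, e.g. interceptor class not found
} catch (RuntimeException e) {
  pipelines.remove(appId); // don't leave a half-initialized chain behind
  throw e;
}
{code}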



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6296) ReservationId.compareTo ignores id when clustertimestamp is the same

2017-03-07 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6296:
-
Description: 
Per YARN-6252
{code:java}
  public int compareTo(ReservationId other) {
if (this.getClusterTimestamp() - other.getClusterTimestamp() == 0) {
  return getId() > getId() ? 1 : getId() < getId() ? -1 : 0;
} else {
}
  }
{code}
compares id with itself. It should be
{code}
return this.getId() > other.getId() ? 1 : (this.getId() < other.getId() ? -1 : 
0);
{code}



  was:
Per YARN-6252
{code:java}
  public int compareTo(ReservationId other) {
if (this.getClusterTimestamp() - other.getClusterTimestamp() == 0) {
  return getId() > getId() ? 1 : getId() < getId() ? -1 : 0;
} else {
}
  }
{code}
compares id with itself. It should be
return this.getId() > other.getId() ? 1 : (this.getId() < other.getId() ? -1 : 
0);




> ReservationId.compareTo ignores id when clustertimestamp is the same
> 
>
> Key: YARN-6296
> URL: https://issues.apache.org/jira/browse/YARN-6296
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: Haibo Chen
>  Labels: newbie
>
> Per YARN-6252
> {code:java}
>   public int compareTo(ReservationId other) {
> if (this.getClusterTimestamp() - other.getClusterTimestamp() == 0) {
>   return getId() > getId() ? 1 : getId() < getId() ? -1 : 0;
> } else {
> }
>   }
> {code}
> compares id with itself. It should be
> {code}
> return this.getId() > other.getId() ? 1 : (this.getId() < other.getId() ? -1 
> : 0);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6297) TestAppLogAggregatorImpl.verifyFilesUploaded() should check # of files uploaded with that of files expected

2017-03-07 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6297:
-
Labels: test  (was: )

> TestAppLogAggregatorImpl.verifyFilesUploaded() should check # of files 
> uploaded with that of files expected
> --
>
> Key: YARN-6297
> URL: https://issues.apache.org/jira/browse/YARN-6297
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>  Labels: test
>
> Per YARN-6252
> {code:java}
>   private static void verifyFilesUploaded(Set<String> filesUploaded,
>       Set<String> filesExpected) {
> final String errMsgPrefix = "The set of files uploaded are not the same " 
> +
> "as expected";
> if(filesUploaded.size() != filesUploaded.size()) {
>   fail(errMsgPrefix + ": actual size: " + filesUploaded.size() + " vs " +
>   "expected size: " + filesExpected.size());
> }
> for(String file: filesExpected) {
>   if(!filesUploaded.contains(file)) {
> fail(errMsgPrefix + ": expecting " + file);
>   }
> }
>   }
> {code}
> should check the number of files uploaded against the number of files 
> expected.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6297) TestAppLogAggregatorImpl.verifyFilesUploaded() should check # of files uploaded with that of files expected

2017-03-07 Thread Haibo Chen (JIRA)
Haibo Chen created YARN-6297:


 Summary: TestAppLogAggregatorImpl.verifyFilesUploaded() should 
check # of files uploaded with that of files expected
 Key: YARN-6297
 URL: https://issues.apache.org/jira/browse/YARN-6297
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Haibo Chen
Assignee: Haibo Chen


Per YARN-6252
{code:java}
  private static void verifyFilesUploaded(Set<String> filesUploaded,
      Set<String> filesExpected) {
final String errMsgPrefix = "The set of files uploaded are not the same " +
"as expected";
if(filesUploaded.size() != filesUploaded.size()) {
  fail(errMsgPrefix + ": actual size: " + filesUploaded.size() + " vs " +
  "expected size: " + filesExpected.size());
}
for(String file: filesExpected) {
  if(!filesUploaded.contains(file)) {
fail(errMsgPrefix + ": expecting " + file);
  }
}
  }
{code}
should check the number of files uploaded against the number of files expected.
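
A minimal sketch of the corrected helper, assuming {{java.util.Set}} and a static import of JUnit's {{fail}}, as in the surrounding test class:

{code:java}
private static void verifyFilesUploaded(Set<String> filesUploaded,
    Set<String> filesExpected) {
  final String errMsgPrefix = "The set of files uploaded are not the same " +
      "as expected";
  // Compare the uploaded set's size against the expected set's size,
  // not against itself.
  if (filesUploaded.size() != filesExpected.size()) {
    fail(errMsgPrefix + ": actual size: " + filesUploaded.size() + " vs " +
        "expected size: " + filesExpected.size());
  }
  for (String file : filesExpected) {
    if (!filesUploaded.contains(file)) {
      fail(errMsgPrefix + ": expecting " + file);
    }
  }
}
{code}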



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6281) Cleanup when AMRMProxy fails to initialize a new interceptor chain

2017-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900029#comment-15900029
 ] 

Hadoop QA commented on YARN-6281:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
18s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6281 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12856660/YARN-6281.v4.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 19a72838918e 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 959940b |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15194/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15194/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Cleanup when AMRMProxy fails to initialize a new interceptor chain
> --
>
> Key: YARN-6281
> URL: https://issues.apache.org/jira/browse/YARN-6281
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-6281.v1.patch, YARN-6281.v2.patch, 
> YARN-6281.v3.patch, YARN-6281.v4.patch
>
>
> When a 

[jira] [Created] (YARN-6301) Fair scheduler docs should explain the meaning of setting a queue's weight to zero

2017-03-07 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-6301:
--

 Summary: Fair scheduler docs should explain the meaning of setting 
a queue's weight to zero
 Key: YARN-6301
 URL: https://issues.apache.org/jira/browse/YARN-6301
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: fairscheduler
Affects Versions: 3.0.0-alpha2
Reporter: Daniel Templeton






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6287) RMCriticalThreadUncaughtExceptionHandler.rmContext should be final

2017-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899930#comment-15899930
 ] 

Hadoop QA commented on YARN-6287:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 
26s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6287 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12856640/YARN-6287.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3fc270ace67e 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f597f4c |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15191/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15191/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RMCriticalThreadUncaughtExceptionHandler.rmContext should be final
> --
>
> Key: YARN-6287
> URL: https://issues.apache.org/jira/browse/YARN-6287
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha2
>   
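
A minimal sketch of the requested change, assuming the handler receives the context through its constructor:

{code:java}
// The field is final, so it must be assigned exactly once, here in the
// constructor; the reference can no longer be reassigned afterwards.
private final RMContext rmContext;

public RMCriticalThreadUncaughtExceptionHandler(RMContext rmContext) {
  this.rmContext = rmContext;
}
{code}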

[jira] [Commented] (YARN-6252) Suspicious code fragments: comparing with itself

2017-03-07 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1585#comment-1585
 ] 

Haibo Chen commented on YARN-6252:
--

Filed YARN-6296 and YARN-6297 to fix the two issues reported. 

> Suspicious code fragments: comparing with itself
> 
>
> Key: YARN-6252
> URL: https://issues.apache.org/jira/browse/YARN-6252
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: AppChecker
>Assignee: Haibo Chen
>
> Hi
> 1) 
> https://github.com/apache/hadoop/blob/235203dffda1482fb38762fde544c4dd9c3e1fa8/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ReservationId.java#L106
> {code:java}
>   return getId() > getId() ? 1 : getId() < getId() ? -1 : 0;
> {code}
> strangely, getId() is compared with itself.
> It should probably be something like this:
> {code:java}
>   return this.getId() > other.getId() ? 1 : this.getId() < other.getId() ? -1 : 0;
> {code}
> 2) 
> https://github.com/apache/hadoop/blob/235203dffda1482fb38762fde544c4dd9c3e1fa8/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestAppLogAggregatorImpl.java#L260
> {code:java}
> if(filesUploaded.size() != filesUploaded.size()) {
>   fail(errMsgPrefix + ": actual size: " + filesUploaded.size() + " vs " +
>   "expected size: " + filesExpected.size());
> }
> {code}
> filesUploaded.size() is compared with itself.
> It should probably be:
> {code:java}
> if(filesUploaded.size() != filesExpected.size()) {
>   fail(errMsgPrefix + ": actual size: " + filesUploaded.size() + " vs " +
>   "expected size: " + filesExpected.size());
> }
> {code}
> These possible defects were found by the [static code analyzer 
> AppChecker|https://cnpo.ru/en/solutions/appchecker.php].
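
Bugs of this shape are easy to pin down with a small ordering test; a hedged sketch, assuming the {{ReservationId.newInstance(clusterTimestamp, id)}} factory and JUnit's static asserts:

{code:java}
// With the self-comparison bug, compareTo() returns 0 for any two ids
// that share a cluster timestamp, so a strict ordering assertion on two
// distinct ids exposes it.
ReservationId smaller = ReservationId.newInstance(1234L, 1L);
ReservationId larger = ReservationId.newInstance(1234L, 2L);
assertTrue(smaller.compareTo(larger) < 0);
assertTrue(larger.compareTo(smaller) > 0);
assertEquals(0, smaller.compareTo(ReservationId.newInstance(1234L, 1L)));
{code}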



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6237) Move UID constant into TimelineReaderUtils

2017-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1593#comment-1593
 ] 

Hadoop QA commented on YARN-6237:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
55s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
53s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
32s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
48s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
43s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-6237 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12856237/YARN-6237-YARN-5355.0001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 54ed8ff4e45e 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5355 / 5d9ad15 |
| Default Java | 1.8.0_121 |
| 

[jira] [Commented] (YARN-6237) Move UID constant into TimelineReaderUtils

2017-03-07 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900014#comment-15900014
 ] 

Varun Saxena commented on YARN-6237:


Clean build. Will commit it shortly.

> Move UID constant into TimelineReaderUtils
> --
>
> Key: YARN-6237
> URL: https://issues.apache.org/jira/browse/YARN-6237
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: newbie
> Attachments: YARN-6237-YARN-5355.0001.patch
>
>
> UID constant is kept in TimelineReaderManager. It can be moved to 
> TimelineReaderUtils, which can keep track of all reader constants. 
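
A minimal sketch of the move; the constant's exact name and value here are assumptions for illustration:

{code:java}
// Hypothetical destination: a non-instantiable holder for reader-side
// constants.
public final class TimelineReaderUtils {
  public static final String UID_KEY = "UID";

  private TimelineReaderUtils() {
  }
}
{code}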



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6300) NULL_UPDATE_REQUESTS is redundant in TestFairScheduler

2017-03-07 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-6300:
--

 Summary: NULL_UPDATE_REQUESTS is redundant in TestFairScheduler
 Key: YARN-6300
 URL: https://issues.apache.org/jira/browse/YARN-6300
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 3.0.0-alpha2
Reporter: Daniel Templeton
Priority: Minor


The {{TestFairScheduler.NULL_UPDATE_REQUESTS}} field hides 
{{FairSchedulerTestBase.NULL_UPDATE_REQUESTS}}, which has the same value.  The 
{{NULL_UPDATE_REQUESTS}} field should be removed from {{TestFairScheduler}}.

While you're at it, maybe also remove the unused import.
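
For context, a generic illustration of the field-hiding pattern at issue (class names and the field's type are placeholders, not the real test classes):

{code:java}
class Base {
  protected static final Object NULL_UPDATE_REQUESTS = null;
}

class Derived extends Base {
  // Hides Base.NULL_UPDATE_REQUESTS with an identical value; deleting
  // this declaration is behavior-neutral because the inherited field
  // is resolved statically and carries the same value.
  protected static final Object NULL_UPDATE_REQUESTS = null;
}
{code}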



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org