[jira] [Commented] (YARN-9268) General improvements in FpgaDevice

2019-03-18 Thread Devaraj K (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795671#comment-16795671
 ] 

Devaraj K commented on YARN-9268:
-

Thanks [~pbacsko] for the patch. The latest patch is not applying to trunk; 
please update it.

> General improvements in FpgaDevice
> --
>
> Key: YARN-9268
> URL: https://issues.apache.org/jira/browse/YARN-9268
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9268-001.patch, YARN-9268-002.patch, 
> YARN-9268-003.patch
>
>
> Need to fix the following in the class {{FpgaDevice}}:
>  * It implements {{Comparable}}, but returns 0 in every case. There is no 
> natural ordering among FPGA devices; perhaps "acl0" comes before "acl1", but 
> this seems too forced and unnecessary. We think this class should not 
> implement {{Comparable}} at all, at least not like that.
>  * Stores unnecessary fields: devName, busNum, temperature, power usage. For 
> one, these are never needed in the code. Secondly, temperature and power usage 
> change constantly. It's pointless to store these in this POJO.
>  * {{serialVersionUID}} is 1L - let's generate a number for this
>  * Use {{int}} instead of {{Integer}} - don't allow nulls. If major/minor 
> uniquely identifies the card, then let's demand them in the constructor and 
> don't store Integers that can be null.
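
A minimal sketch of the direction suggested above (this is not the YARN-9268 
patch; everything beyond the class name and the major/minor fields is an 
illustrative assumption): primitive {{int}} major/minor demanded by the 
constructor, no {{Comparable}}, a generated {{serialVersionUID}}, and no 
transient fields such as temperature or power usage.

{code:java}
// Illustrative sketch only, not the actual patch.
import java.io.Serializable;
import java.util.Objects;

public class FpgaDevice implements Serializable {
  // generated value instead of the default 1L
  private static final long serialVersionUID = -7234856733898387629L;

  private final String type;  // vendor plugin type string (illustrative field)
  private final int major;    // primitives cannot be null
  private final int minor;

  public FpgaDevice(String type, int major, int minor) {
    this.type = type;
    this.major = major;
    this.minor = minor;
  }

  public int getMajor() { return major; }
  public int getMinor() { return minor; }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (!(o instanceof FpgaDevice)) {
      return false;
    }
    FpgaDevice other = (FpgaDevice) o;
    // major/minor uniquely identify the card, as suggested above
    return major == other.major && minor == other.minor;
  }

  @Override
  public int hashCode() {
    return Objects.hash(major, minor);
  }
}
{code}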



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9267) General improvements in FpgaResourceHandlerImpl

2019-03-18 Thread Devaraj K (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795669#comment-16795669
 ] 

Devaraj K commented on YARN-9267:
-

Thanks [~pbacsko] for the patch. The latest patch has gone stale; can you 
update the patch?

> General improvements in FpgaResourceHandlerImpl
> ---
>
> Key: YARN-9267
> URL: https://issues.apache.org/jira/browse/YARN-9267
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9267-001.patch, YARN-9267-002.patch, 
> YARN-9267-003.patch, YARN-9267-004.patch, YARN-9267-005.patch, 
> YARN-9267-006.patch
>
>
> Fix some problems in {{FpgaResourceHandlerImpl}}:
>  * {{preStart()}} does not reconfigure the card with the same IP - we see this 
> as a problem. If you recompile the FPGA application, you must rename the aocx 
> file because the card will not be reprogrammed. Suggestion: instead of storing 
> a Node<->IPID mapping, store a Node<->IPID hash (like the SHA-256 of the 
> localized file).
>  * Switch to slf4j from Apache Commons Logging
>  * Some unused imports
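
A hedged sketch of the hashing suggestion above (not taken from the attached 
patches; the class and method names are made up for illustration). The idea is 
to key the reprogramming decision on the SHA-256 of the localized aocx file, so 
a recompiled application with the same IP ID still triggers reprogramming.

{code:java}
// Illustrative helper, not the YARN-9267 patch.
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

final class AocxHashUtil {

  /** Returns the SHA-256 of the localized aocx file as a hex string. */
  static String sha256Of(Path localizedAocx) throws Exception {
    MessageDigest digest = MessageDigest.getInstance("SHA-256");
    try (InputStream in = Files.newInputStream(localizedAocx)) {
      byte[] buf = new byte[8192];
      int read;
      while ((read = in.read(buf)) != -1) {
        digest.update(buf, 0, read);
      }
    }
    StringBuilder hex = new StringBuilder();
    for (byte b : digest.digest()) {
      hex.append(String.format("%02x", b));
    }
    return hex.toString();
  }
}
{code}

{{preStart()}} could then reprogram the card whenever the stored hash for the 
device differs from the hash of the newly localized file, even if the IP ID is 
unchanged.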



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9270) Minor cleanup in TestFpgaDiscoverer

2019-03-18 Thread Devaraj K (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795675#comment-16795675
 ] 

Devaraj K commented on YARN-9270:
-

Thanks [~pbacsko] for the patch. The latest patch is not getting applied; 
please update it.

> Minor cleanup in TestFpgaDiscoverer
> ---
>
> Key: YARN-9270
> URL: https://issues.apache.org/jira/browse/YARN-9270
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9270-001.patch, YARN-9270-002.patch, 
> YARN-9270-003.patch
>
>
> Let's do some cleanup in this class.
> * {{testLinuxFpgaResourceDiscoverPluginConfig}} - this test should be split 
> into 5 different tests, because it tests 5 different scenarios.
> * remove {{setNewEnvironmentHack()}} - too complicated. We can introduce a 
> {{Function<String, String>}} in the plugin class like {{Function<String, 
> String> envProvider = System::getenv}} plus a setter method which allows the 
> test to modify {{envProvider}}. Much simpler and more straightforward.
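
A sketch of the {{envProvider}} idea (not the actual TestFpgaDiscoverer or 
plugin code; the class name, the setter and the environment variable are 
placeholders):

{code:java}
// Illustrative sketch only.
import java.util.function.Function;
import com.google.common.annotations.VisibleForTesting;

public class FpgaPluginSketch {
  // production default: read the real process environment
  private Function<String, String> envProvider = System::getenv;

  @VisibleForTesting
  void setEnvProvider(Function<String, String> provider) {
    this.envProvider = provider;
  }

  String readSdkRoot() {
    // "FPGA_SDK_ROOT" is a placeholder variable name
    return envProvider.apply("FPGA_SDK_ROOT");
  }
}
{code}

A test can then call {{setEnvProvider(fakeEnv::get)}} with a plain 
{{HashMap<String, String>}} instead of mutating the process environment via 
reflection, as {{setNewEnvironmentHack()}} does.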



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9269) Minor cleanup in FpgaResourceAllocator

2019-03-18 Thread Devaraj K (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795674#comment-16795674
 ] 

Devaraj K commented on YARN-9269:
-

Thanks [~pbacsko] for the patch. The latest patch is not getting applied; 
please update it.

> Minor cleanup in FpgaResourceAllocator
> --
>
> Key: YARN-9269
> URL: https://issues.apache.org/jira/browse/YARN-9269
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9269-001.patch, YARN-9269-002.patch, 
> YARN-9269-003.patch
>
>
> Some stuff that we observed:
>  * {{addFpga()}} - we check for duplicate devices, but we don't print any 
> error/warning if there's any.
>  * {{findMatchedFpga()}} should be called {{findMatchingFpga()}}. Also, is 
> this method even needed? We already receive an {{FpgaDevice}} instance in 
> {{updateFpga()}} which I believe is the same that we're looking up.
>  * variable {{IPIDpreference}} is confusing
>  * {{availableFpga}} / {{usedFpgaByRequestor}} are instances of 
> {{LinkedHashMap}}. What's the rationale behind this? Doesn't a simple 
> {{HashMap}} suffice?
>  * {{usedFpgaByRequestor}} should be renamed, naming is a bit unclear
>  * {{allowedFpgas}} should be an immutable list
>  * {{@VisibleForTesting}} methods should be package private
>  * get rid of {{*}} imports
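
A sketch of two of the points above (a warning on duplicates in {{addFpga()}} 
and an immutable view of {{allowedFpgas}}); this is illustrative only, not the 
attached patches, and the surrounding class is heavily simplified:

{code:java}
// Illustrative sketch only; slf4j logging assumed.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class FpgaAllocatorSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(FpgaAllocatorSketch.class);

  // FpgaDevice is the POJO discussed in YARN-9268
  private final List<FpgaDevice> allowedFpgas = new ArrayList<>();

  void addFpga(String type, List<FpgaDevice> discovered) {
    for (FpgaDevice device : discovered) {
      if (allowedFpgas.contains(device)) {
        // previously duplicates were silently skipped; now we log them
        LOG.warn("Duplicate FPGA device {} reported for type {}, ignoring it",
            device, type);
        continue;
      }
      allowedFpgas.add(device);
    }
  }

  List<FpgaDevice> getAllowedFpgas() {
    // callers get a read-only view and cannot mutate the internal list
    return Collections.unmodifiableList(allowedFpgas);
  }
}
{code}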



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9365) fix wrong command in TimelineServiceV2.md

2019-03-18 Thread Rohith Sharma K S (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795654#comment-16795654
 ] 

Rohith Sharma K S commented on YARN-9365:
-

+1 committing shortly.

> fix wrong command in TimelineServiceV2.md 
> --
>
> Key: YARN-9365
> URL: https://issues.apache.org/jira/browse/YARN-9365
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Runlin Zhang
>Assignee: Runlin Zhang
>Priority: Major
> Attachments: YARN-9365.patch
>
>
> In TimelineServiceV2.md (line 255), the step to create the timeline service 
> schema does not work.
>  
> {noformat}
> Finally, run the schema creator tool to create the necessary tables:
> bin/hadoop 
> org.apache.hadoop.yarn.server.timelineservice.storage.TimelineSchemaCreator 
> -create{noformat}
>  
> should be
>  
> {noformat}
> The schema creation can be run on the hbase cluster which is going to store 
> the timeline
> service tables. The schema creator tool requires both the 
> timelineservice-hbase as well
> as the hbase-server jars. Hence, during schema creation, you need to ensure 
> that the
> hbase classpath contains the yarn-timelineservice-hbase jar.
> On the hbase cluster, you can get it from hdfs since we placed it there for 
> the
> coprocessor in the step above.
> ```
>hadoop fs -get 
> /hbase/coprocessor/hadoop-yarn-server-timelineservice-hbase-client-${project.version}.jar
>hadoop fs -get 
> /hbase/coprocessor/hadoop-yarn-server-timelineservice-${project.version}.jar
>hadoop fs -get 
> /hbase/coprocessor/hadoop-yarn-server-timelineservice-hbase-common-${project.version}.jar
>  /.
> ```
> Next, add it to the hbase classpath as follows:
> ```
>export 
> HBASE_CLASSPATH=$HBASE_CLASSPATH:/home/yarn/hadoop-current/share/hadoop/yarn/timelineservice/hadoop-yarn-server-timelineservice-hbase-client-${project.version}.jar
>export 
> HBASE_CLASSPATH=$HBASE_CLASSPATH:/home/yarn/hadoop-current/share/hadoop/yarn/timelineservice/hadoop-yarn-server-timelineservice-${project.version}.jar
>export 
> HBASE_CLASSPATH=$HBASE_CLASSPATH:/home/yarn/hadoop-current/share/hadoop/yarn/timelineservice/hadoop-yarn-server-timelineservice-hbase-common-${project.version}.jar
> ```
> Finally, run the schema creator tool to create the necessary tables:
> ```
> bin/hbase 
> org.apache.hadoop.yarn.server.timelineservice.storage.TimelineSchemaCreator 
> -create
> ```{noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8967) Change FairScheduler to use PlacementRule interface

2019-03-18 Thread Wilfred Spiegelenburg (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795640#comment-16795640
 ] 

Wilfred Spiegelenburg commented on YARN-8967:
-

Cleaned up the checkstyle issues and fixed the junit test failures.
Also removed a partial diff that crept in from YARN-9314.

> Change FairScheduler to use PlacementRule interface
> ---
>
> Key: YARN-8967
> URL: https://issues.apache.org/jira/browse/YARN-8967
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler, fairscheduler
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-8967.001.patch, YARN-8967.002.patch, 
> YARN-8967.003.patch, YARN-8967.004.patch, YARN-8967.005.patch, 
> YARN-8967.006.patch, YARN-8967.007.patch, YARN-8967.008.patch, 
> YARN-8967.009.patch
>
>
> The PlacementRule interface was introduced to be used by all schedulers as 
> per YARN-3635. The CapacityScheduler is using it but the FairScheduler is not 
> and is using its own rule definition.
> YARN-8948 cleans up the implementation and removes the CS references which 
> should allow this change to go through.
> This would be the first step in using one placement rule engine for both 
> schedulers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8967) Change FairScheduler to use PlacementRule interface

2019-03-18 Thread Wilfred Spiegelenburg (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wilfred Spiegelenburg updated YARN-8967:

Attachment: YARN-8967.009.patch

> Change FairScheduler to use PlacementRule interface
> ---
>
> Key: YARN-8967
> URL: https://issues.apache.org/jira/browse/YARN-8967
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler, fairscheduler
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-8967.001.patch, YARN-8967.002.patch, 
> YARN-8967.003.patch, YARN-8967.004.patch, YARN-8967.005.patch, 
> YARN-8967.006.patch, YARN-8967.007.patch, YARN-8967.008.patch, 
> YARN-8967.009.patch
>
>
> The PlacementRule interface was introduced to be used by all schedulers as 
> per YARN-3635. The CapacityScheduler is using it but the FairScheduler is not 
> and is using its own rule definition.
> YARN-8948 cleans up the implementation and removes the CS references which 
> should allow this change to go through.
> This would be the first step in using one placement rule engine for both 
> schedulers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4498) Application level node labels stats to be available in REST

2019-03-18 Thread Runlin Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Runlin Zhang updated YARN-4498:
---
Description: 
* Currently node label stats per application are not available through REST, 
such as the labels currently used by all live containers, total stats of 
containers per label for the app, etc.

CLI and web UI scenarios will be handled separately.

  was:
Currently node label stats per application are not available through REST, 
such as the labels currently used by all live containers, total stats of 
containers per label for the app, etc.

CLI and web UI scenarios will be handled separately.


> Application level node labels stats to be available in REST
> ---
>
> Key: YARN-4498
> URL: https://issues.apache.org/jira/browse/YARN-4498
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
>  Labels: oct16-medium
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-4498.patch, YARN-4498.0002.patch, 
> YARN-4498.0003.patch, YARN-4498.0004.patch, YARN-4498.addendum.001.patch, 
> YARN-4498.branch-2.8.0001.patch, YARN-4498.branch-2.8.addendum.001.patch, 
> apps.xml
>
>
> * Currently node label stats per application are not available through REST, 
> such as the labels currently used by all live containers, total stats of 
> containers per label for the app, etc.
> CLI and web UI scenarios will be handled separately.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9398) Javadoc error on FPGA related java files

2019-03-18 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph reassigned YARN-9398:
---

Assignee: (was: Prabhu Joseph)

> Javadoc error on FPGA related java files
> 
>
> Key: YARN-9398
> URL: https://issues.apache.org/jira/browse/YARN-9398
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Priority: Major
>
> {code}
> [ERROR] 
> /home/eyang/test/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/AbstractFpgaVendorPlugin.java:46:
>  warning: no @param for conf
> [ERROR]   boolean initPlugin(Configuration conf);
> [ERROR]   ^
> [ERROR] 
> /home/eyang/test/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/AbstractFpgaVendorPlugin.java:46:
>  warning: no @return
> [ERROR]   boolean initPlugin(Configuration conf);
> [ERROR]   ^
> [ERROR] 
> /home/eyang/test/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/AbstractFpgaVendorPlugin.java:51:
>  warning: no @param for timeout
> [ERROR]   boolean diagnose(int timeout);
> [ERROR]   ^
> [ERROR] 
> /home/eyang/test/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/AbstractFpgaVendorPlugin.java:51:
>  warning: no @return
> [ERROR]   boolean diagnose(int timeout);
> [ERROR]   ^
> [ERROR] 
> /home/eyang/test/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/AbstractFpgaVendorPlugin.java:64:
>  warning: no @return
> [ERROR]   String getFpgaType();
> [ERROR]  ^
> [ERROR] 
> /home/eyang/test/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/FpgaDiscoverer.java:119:
>  warning: no @return
> [ERROR]   public List discover()
> [ERROR] ^
> [ERROR] 
> /home/eyang/test/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/FpgaDiscoverer.java:119:
>  warning: no @throws for 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.ResourceHandlerException
> [ERROR]   public List discover()
> [ERROR] ^
> [ERROR] 
> /home/eyang/test/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/IntelFpgaOpenclPlugin.java:156:
>  error: bad HTML entity
> [ERROR]*  Helper class to run aocl diagnose & determine major/minor 
> numbers.
> {code}
> YARN-9266 introduced some javadoc compilation errors.
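
For illustration, the style of fix these errors call for (this is not the 
actual patch; the javadoc wording is an assumption): add the missing 
{{@param}}/{{@return}} tags and escape the ampersand as {{&amp;}}.

{code:java}
// Illustrative example only.
import org.apache.hadoop.conf.Configuration;

interface FpgaVendorPluginJavadocExample {

  /**
   * Initializes the vendor plugin. (Elsewhere, a comment such as "run aocl
   * diagnose &amp; determine major/minor numbers" uses the escaped entity.)
   *
   * @param conf the YARN configuration used to initialize the plugin
   * @return true if the plugin was initialized successfully
   */
  boolean initPlugin(Configuration conf);
}
{code}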



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9398) Javadoc error on FPGA related java files

2019-03-18 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph reassigned YARN-9398:
---

Assignee: Prabhu Joseph

> Javadoc error on FPGA related java files
> 
>
> Key: YARN-9398
> URL: https://issues.apache.org/jira/browse/YARN-9398
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
>
> {code}
> [ERROR] 
> /home/eyang/test/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/AbstractFpgaVendorPlugin.java:46:
>  warning: no @param for conf
> [ERROR]   boolean initPlugin(Configuration conf);
> [ERROR]   ^
> [ERROR] 
> /home/eyang/test/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/AbstractFpgaVendorPlugin.java:46:
>  warning: no @return
> [ERROR]   boolean initPlugin(Configuration conf);
> [ERROR]   ^
> [ERROR] 
> /home/eyang/test/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/AbstractFpgaVendorPlugin.java:51:
>  warning: no @param for timeout
> [ERROR]   boolean diagnose(int timeout);
> [ERROR]   ^
> [ERROR] 
> /home/eyang/test/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/AbstractFpgaVendorPlugin.java:51:
>  warning: no @return
> [ERROR]   boolean diagnose(int timeout);
> [ERROR]   ^
> [ERROR] 
> /home/eyang/test/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/AbstractFpgaVendorPlugin.java:64:
>  warning: no @return
> [ERROR]   String getFpgaType();
> [ERROR]  ^
> [ERROR] 
> /home/eyang/test/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/FpgaDiscoverer.java:119:
>  warning: no @return
> [ERROR]   public List discover()
> [ERROR] ^
> [ERROR] 
> /home/eyang/test/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/FpgaDiscoverer.java:119:
>  warning: no @throws for 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.ResourceHandlerException
> [ERROR]   public List discover()
> [ERROR] ^
> [ERROR] 
> /home/eyang/test/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/IntelFpgaOpenclPlugin.java:156:
>  error: bad HTML entity
> [ERROR]*  Helper class to run aocl diagnose & determine major/minor 
> numbers.
> {code}
> YARN-9266 introduced some javadoc compilation errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9364) Remove commons-logging dependency from remaining hadoop-yarn

2019-03-18 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795582#comment-16795582
 ] 

Prabhu Joseph commented on YARN-9364:
-

Thanks [~eyang].

> Remove commons-logging dependency from remaining hadoop-yarn
> 
>
> Key: YARN-9364
> URL: https://issues.apache.org/jira/browse/YARN-9364
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9364-001.patch
>
>
> YARN-6712 removed the usage of the commons-logging dependency, so the 
> dependency itself can now be removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9340) [Clean-up] Remove NULL check before instanceof in ResourceRequestSetKey

2019-03-18 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795561#comment-16795561
 ] 

Shweta commented on YARN-9340:
--

Thanks for the commit [~templedf]

> [Clean-up] Remove NULL check before instanceof in ResourceRequestSetKey
> ---
>
> Key: YARN-9340
> URL: https://issues.apache.org/jira/browse/YARN-9340
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: YARN-9340.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9399) Yarn Client may use stale DNS to connect to RM

2019-03-18 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/YARN-9399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned YARN-9399:
-

Assignee: Íñigo Goiri

> Yarn Client may use stale DNS to connect to RM
> --
>
> Key: YARN-9399
> URL: https://issues.apache.org/jira/browse/YARN-9399
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.1
>Reporter: Leon zhang
>Assignee: Íñigo Goiri
>Priority: Major
>  Labels: patch
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> This happens more frequently when running YARN in Kubernetes. When the YARN 
> client tries to connect to the RM and the DNS name of the RM is not 
> resolvable (because kube-dns failed or is not ready yet), the client 
> initializes itself with an unresolved InetSocketAddress in 
> RMProxy#newProxyInstance(). The connection to the RM then fails with 
> UnknownHostException. The client retries the connection via RetryProxy, but 
> it always uses the cached unresolved InetSocketAddress, so the retry never 
> succeeds. The bug also occurs when the RM is rescheduled to another 
> Kubernetes node, which changes the RM IP. Currently the workaround is to 
> restart the YARN client. 
> This issue happens with both HA and non-HA RM. HDFS has similar issues. 
> [https://github.com/apache-spark-on-k8s/kubernetes-HDFS/issues/48]
> I propose to add a new RMFailoverProxyProvider called 
> AutoRefreshRMFailoverProxyProvider which resolves the DNS in the overridden 
> getProxy() method. This way, RetryProxy can resolve the DNS on each retry. 
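
A minimal sketch of the core idea, re-resolving the RM address on every retry 
instead of caching an unresolved InetSocketAddress (this is not the proposed 
AutoRefreshRMFailoverProxyProvider or any Hadoop API; the class and method 
names are made up for illustration):

{code:java}
// Illustrative sketch only.
import java.net.InetSocketAddress;

final class RmAddressResolver {
  private final String host;
  private final int port;

  RmAddressResolver(String host, int port) {
    this.host = host;
    this.port = port;
  }

  /**
   * Called before each (re)connection attempt. Constructing a new
   * InetSocketAddress performs a fresh DNS lookup, so an RM name that was
   * unresolvable earlier, or whose IP changed after a pod reschedule, is
   * picked up once DNS recovers.
   */
  InetSocketAddress resolve() {
    return new InetSocketAddress(host, port);
  }
}
{code}

In the proposal, the overridden {{getProxy()}} would perform this kind of 
re-resolution each time, so RetryProxy never keeps working with a stale, 
unresolved address.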



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9399) Yarn Client may use stale DNS to connect to RM

2019-03-18 Thread JIRA


[ 
https://issues.apache.org/jira/browse/YARN-9399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795559#comment-16795559
 ] 

Íñigo Goiri commented on YARN-9399:
---

Moved from HDFS to YARN. 

> Yarn Client may use stale DNS to connect to RM
> --
>
> Key: YARN-9399
> URL: https://issues.apache.org/jira/browse/YARN-9399
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.1
>Reporter: Leon zhang
>Priority: Major
>  Labels: patch
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> This happens more frequently when running YARN in Kubernetes. When the YARN 
> client tries to connect to the RM and the DNS name of the RM is not 
> resolvable (because kube-dns failed or is not ready yet), the client 
> initializes itself with an unresolved InetSocketAddress in 
> RMProxy#newProxyInstance(). The connection to the RM then fails with 
> UnknownHostException. The client retries the connection via RetryProxy, but 
> it always uses the cached unresolved InetSocketAddress, so the retry never 
> succeeds. The bug also occurs when the RM is rescheduled to another 
> Kubernetes node, which changes the RM IP. Currently the workaround is to 
> restart the YARN client. 
> This issue happens with both HA and non-HA RM. HDFS has similar issues. 
> [https://github.com/apache-spark-on-k8s/kubernetes-HDFS/issues/48]
> I propose to add a new RMFailoverProxyProvider called 
> AutoRefreshRMFailoverProxyProvider which resolves the DNS in the overridden 
> getProxy() method. This way, RetryProxy can resolve the DNS on each retry. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Moved] (YARN-9399) Yarn Client may use stale DNS to connect to RM

2019-03-18 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/YARN-9399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri moved HDFS-14376 to YARN-9399:
--

Affects Version/s: (was: 2.9.1)
   2.9.1
 Target Version/s:   (was: 3.1.0, 2.9.1)
  Component/s: (was: caching)
  Key: YARN-9399  (was: HDFS-14376)
  Project: Hadoop YARN  (was: Hadoop HDFS)

> Yarn Client may use stale DNS to connect to RM
> --
>
> Key: YARN-9399
> URL: https://issues.apache.org/jira/browse/YARN-9399
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.1
>Reporter: Leon zhang
>Priority: Major
>  Labels: patch
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> This happens more frequently when running YARN in Kubernetes. When the YARN 
> client tries to connect to the RM and the DNS name of the RM is not 
> resolvable (because kube-dns failed or is not ready yet), the client 
> initializes itself with an unresolved InetSocketAddress in 
> RMProxy#newProxyInstance(). The connection to the RM then fails with 
> UnknownHostException. The client retries the connection via RetryProxy, but 
> it always uses the cached unresolved InetSocketAddress, so the retry never 
> succeeds. The bug also occurs when the RM is rescheduled to another 
> Kubernetes node, which changes the RM IP. Currently the workaround is to 
> restart the YARN client. 
> This issue happens with both HA and non-HA RM. HDFS has similar issues. 
> [https://github.com/apache-spark-on-k8s/kubernetes-HDFS/issues/48]
> I propose to add a new RMFailoverProxyProvider called 
> AutoRefreshRMFailoverProxyProvider which resolves the DNS in the overridden 
> getProxy() method. This way, RetryProxy can resolve the DNS on each retry. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-9291) Backport YARN-7637 to branch-2

2019-03-18 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795508#comment-16795508
 ] 

Jonathan Hung edited comment on YARN-9291 at 3/19/19 12:56 AM:
---

Test failure was fixed as part of YARN-9397. Checkstyle failure is part of 
original patch.


was (Author: jhung):
Test failure was fixed as part of YARN-9397.

> Backport YARN-7637 to branch-2
> --
>
> Key: YARN-9291
> URL: https://issues.apache.org/jira/browse/YARN-9291
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9291-YARN-8200.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9397) Fix empty NMResourceInfo object test failures in branch-2

2019-03-18 Thread Zhe Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795511#comment-16795511
 ] 

Zhe Zhang commented on YARN-9397:
-

+1, looks like a clean fix.

> Fix empty NMResourceInfo object test failures in branch-2
> -
>
> Key: YARN-9397
> URL: https://issues.apache.org/jira/browse/YARN-9397
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9397-YARN-8200.001.patch
>
>
> Appears the empty object handling behavior changed in jersey versions 
> (branch-2 is on jersey 1.9, branch-3 on 1.19).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-9335) [atsv2] Restrict the number of elements held in NM timeline collector when backend is unreachable for async calls

2019-03-18 Thread Anil Sadineni (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795512#comment-16795512
 ] 

Anil Sadineni edited comment on YARN-9335 at 3/19/19 12:31 AM:
---

[~abmodi] I observed a small correction needed in the yarn-default.xml file: 
the queue capacity key name has 'writer' repeated twice. 
{quote}
 <property>
 <description>The setting that decides the capacity of the queue to hold
 asynchronous timeline entities.</description>
 <name>yarn.timeline-service.writer.writer.async.queue.capacity</name>
 <value>100</value>
 </property>
{quote}


was (Author: sadineni):
[~abmodi] I observed a small correction needed in the yarn-default.xml file: 
the queue capacity key name has 'writer' repeated twice. 

{{<property>}}
{{ <description>The setting that decides the capacity of the queue to hold}}
{{ asynchronous timeline entities.</description>}}
{{ <name>yarn.timeline-service.-writer-.writer.async.queue.capacity</name>}}
{{ <value>100</value>}}
{{</property>}}

> [atsv2] Restrict the number of elements held in NM timeline collector when 
> backend is unreachable for async calls
> -
>
> Key: YARN-9335
> URL: https://issues.apache.org/jira/browse/YARN-9335
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-9335.001.patch, YARN-9335.002.patch
>
>
> For ATSv2 , if the backend is unreachable, the number/size of data held in 
> timeline collector's memory increases significantly. This is not good for the 
> NM memory. 
> Filing jira to set a limit on how many/much should be retained by the 
> timeline collector in memory in case the backend is not reachable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-9335) [atsv2] Restrict the number of elements held in NM timeline collector when backend is unreachable for async calls

2019-03-18 Thread Anil Sadineni (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795512#comment-16795512
 ] 

Anil Sadineni edited comment on YARN-9335 at 3/19/19 12:28 AM:
---

[~abmodi] I observed a small correction needed in the yarn-default.xml file: 
the queue capacity key name has 'writer' repeated twice. 

{{<property>}}
{{ <description>The setting that decides the capacity of the queue to hold}}
{{ asynchronous timeline entities.</description>}}
{{ <name>yarn.timeline-service.-writer-.writer.async.queue.capacity</name>}}
{{ <value>100</value>}}
{{</property>}}


was (Author: sadineni):
[~abmodi] I observed a small correction needed in the yarn-default.xml file: 
the queue capacity key name has 'writer' repeated twice. 
{quote}{{<property>}}
{{ <description>The setting that decides the capacity of the queue to hold}}
{{ asynchronous timeline entities.</description>}}
{{ <name>yarn.timeline-service.-writer.-writer.async.queue.capacity</name>}}
{{ <value>100</value>}}
{{</property>}}
{quote}
 

> [atsv2] Restrict the number of elements held in NM timeline collector when 
> backend is unreachable for async calls
> -
>
> Key: YARN-9335
> URL: https://issues.apache.org/jira/browse/YARN-9335
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-9335.001.patch, YARN-9335.002.patch
>
>
> For ATSv2 , if the backend is unreachable, the number/size of data held in 
> timeline collector's memory increases significantly. This is not good for the 
> NM memory. 
> Filing jira to set a limit on how many/much should be retained by the 
> timeline collector in memory in case the backend is not reachable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9335) [atsv2] Restrict the number of elements held in NM timeline collector when backend is unreachable for async calls

2019-03-18 Thread Anil Sadineni (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795512#comment-16795512
 ] 

Anil Sadineni commented on YARN-9335:
-

[~abmodi] I observed a small correction needed in the yarn-default.xml file: 
the queue capacity key name has 'writer' repeated twice. 
{quote}{{<property>}}
{{ <description>The setting that decides the capacity of the queue to hold}}
{{ asynchronous timeline entities.</description>}}
{{ <name>yarn.timeline-service.-writer.-writer.async.queue.capacity</name>}}
{{ <value>100</value>}}
{{</property>}}
{quote}
 

> [atsv2] Restrict the number of elements held in NM timeline collector when 
> backend is unreachable for async calls
> -
>
> Key: YARN-9335
> URL: https://issues.apache.org/jira/browse/YARN-9335
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-9335.001.patch, YARN-9335.002.patch
>
>
> For ATSv2 , if the backend is unreachable, the number/size of data held in 
> timeline collector's memory increases significantly. This is not good for the 
> NM memory. 
> Filing jira to set a limit on how many/much should be retained by the 
> timeline collector in memory in case the backend is not reachable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9291) Backport YARN-7637 to branch-2

2019-03-18 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795508#comment-16795508
 ] 

Jonathan Hung commented on YARN-9291:
-

Test failure was fixed as part of YARN-9397.

> Backport YARN-7637 to branch-2
> --
>
> Key: YARN-9291
> URL: https://issues.apache.org/jira/browse/YARN-9291
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9291-YARN-8200.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9291) Backport YARN-7637 to branch-2

2019-03-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795507#comment-16795507
 ] 

Hadoop QA commented on YARN-9291:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-8200 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 5s{color} | {color:green} YARN-8200 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} YARN-8200 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} YARN-8200 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} YARN-8200 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} YARN-8200 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m  
9s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:a5f678f |
| JIRA Issue | YARN-9291 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958084/YARN-9291-YARN-8200.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7177314061fa 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-8200 / 0bac160 |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_181 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/23743/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23743/testReport/ |
| Max. process+thread count | 161 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/23743/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Backport YARN-7637 to branch-2
> --
>
> Key: YARN-9291
> URL: 

[jira] [Commented] (YARN-9364) Remove commons-logging dependency from remaining hadoop-yarn

2019-03-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795506#comment-16795506
 ] 

Hudson commented on YARN-9364:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16240 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16240/])
YARN-9364.  Remove commons-logging dependency from YARN. (eyang: 
rev 09eabda314fb0e5532e5391bc37fe84b883d3499)
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/pom.xml
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml


> Remove commons-logging dependency from remaining hadoop-yarn
> 
>
> Key: YARN-9364
> URL: https://issues.apache.org/jira/browse/YARN-9364
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9364-001.patch
>
>
> YARN-6712 removed the usage of the commons-logging dependency, so the 
> dependency itself can now be removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9370) Better logging in recoverAssignedGpus in class GpuResourceAllocator

2019-03-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795495#comment-16795495
 ] 

Hadoop QA commented on YARN-9370:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
37s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9370 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962884/YARN-9370.005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f36da03d1073 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ae3a2c3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23742/testReport/ |
| Max. process+thread count | 448 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/23742/console |
| Powered by | Apache Yetus 0.8.0   

[jira] [Created] (YARN-9398) Javadoc error on FPGA related java files

2019-03-18 Thread Eric Yang (JIRA)
Eric Yang created YARN-9398:
---

 Summary: Javadoc error on FPGA related java files
 Key: YARN-9398
 URL: https://issues.apache.org/jira/browse/YARN-9398
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Eric Yang


{code}
[ERROR] 
/home/eyang/test/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/AbstractFpgaVendorPlugin.java:46:
 warning: no @param for conf
[ERROR]   boolean initPlugin(Configuration conf);
[ERROR]   ^
[ERROR] 
/home/eyang/test/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/AbstractFpgaVendorPlugin.java:46:
 warning: no @return
[ERROR]   boolean initPlugin(Configuration conf);
[ERROR]   ^
[ERROR] 
/home/eyang/test/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/AbstractFpgaVendorPlugin.java:51:
 warning: no @param for timeout
[ERROR]   boolean diagnose(int timeout);
[ERROR]   ^
[ERROR] 
/home/eyang/test/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/AbstractFpgaVendorPlugin.java:51:
 warning: no @return
[ERROR]   boolean diagnose(int timeout);
[ERROR]   ^
[ERROR] 
/home/eyang/test/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/AbstractFpgaVendorPlugin.java:64:
 warning: no @return
[ERROR]   String getFpgaType();
[ERROR]  ^
[ERROR] 
/home/eyang/test/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/FpgaDiscoverer.java:119:
 warning: no @return
[ERROR]   public List discover()
[ERROR] ^
[ERROR] 
/home/eyang/test/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/FpgaDiscoverer.java:119:
 warning: no @throws for 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.ResourceHandlerException
[ERROR]   public List discover()
[ERROR] ^
[ERROR] 
/home/eyang/test/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/IntelFpgaOpenclPlugin.java:156:
 error: bad HTML entity
[ERROR]*  Helper class to run aocl diagnose & determine major/minor numbers.
{code}

YARN-9266 introduced some javadoc compilation errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9364) Remove commons-logging dependency from remaining hadoop-yarn

2019-03-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795464#comment-16795464
 ] 

Hadoop QA commented on YARN-9364:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
43m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
16s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
50s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
57s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
54s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m  
0s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
59s{color} | {color:green} hadoop-yarn-server-web-proxy in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
27s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 76m 
30s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
7s{color} | {color:green} hadoop-yarn-server-tests in the patch passed. {color} 
|
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 25m 
45s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m  
8s{color} | {color:green} hadoop-yarn-applications-distributedshell in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
1s{color} | {color:green} 

[jira] [Updated] (YARN-9370) Better logging in recoverAssignedGpus in class GpuResourceAllocator

2019-03-18 Thread Yesha Vora (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yesha Vora updated YARN-9370:
-
Attachment: YARN-9370.005.patch

> Better logging in recoverAssignedGpus in class GpuResourceAllocator
> ---
>
> Key: YARN-9370
> URL: https://issues.apache.org/jira/browse/YARN-9370
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Yesha Vora
>Priority: Trivial
>  Labels: newbie, newbie++
> Attachments: YARN-9370.001.patch, YARN-9370.002.patch, 
> YARN-9370.003.patch, YARN-9370.004.patch, YARN-9370.005.patch
>
>
> The last line of 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.gpu.GpuResourceAllocator#recoverAssignedGpus
>  is this: 
> {code:java}
> usedDevices.put(gpuDevice, containerId);
> {code}
> We should have an info-level (or at least a debug-level) log to indicate that 
> a container is allocated to a GPU device during recovery. Please also check the 
> recovery-related code; there may be some room for improvement around logging.
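A minimal sketch of the kind of log line the description asks for, assuming an 
SLF4J logger field named {{LOG}}; the types and message wording are stand-ins, 
not taken from any attached patch:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class GpuRecoveryLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(GpuRecoveryLoggingSketch.class);

  // Could be invoked right before usedDevices.put(gpuDevice, containerId).
  void logRecoveredAssignment(Object gpuDevice, Object containerId) {
    // Parameterized SLF4J call; no isInfoEnabled() guard is needed.
    LOG.info("Recovered GPU assignment during recovery: device {} -> container {}",
        gpuDevice, containerId);
  }
}
{code}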



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9397) Fix empty NMResourceInfo object test failures in branch-2

2019-03-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795419#comment-16795419
 ] 

Hadoop QA commented on YARN-9397:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-8200 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 6s{color} | {color:green} YARN-8200 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} YARN-8200 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} YARN-8200 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} YARN-8200 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} YARN-8200 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
35s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:a5f678f |
| JIRA Issue | YARN-9397 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962875/YARN-9397-YARN-8200.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3db4c21399b8 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-8200 / d9616d6 |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23740/testReport/ |
| Max. process+thread count | 150 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/23740/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Fix empty NMResourceInfo object test failures in branch-2
> -
>
> Key: YARN-9397
> URL: https://issues.apache.org/jira/browse/YARN-9397
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9397-YARN-8200.001.patch
>
>
> Appears 

[jira] [Commented] (YARN-9370) Better logging in recoverAssignedGpus in class GpuResourceAllocator

2019-03-18 Thread Yesha Vora (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795388#comment-16795388
 ] 

Yesha Vora commented on YARN-9370:
--

[~snemeth] / [~eyang]: Thank you for the review. A new patch using slf4j has 
been uploaded.

> Better logging in recoverAssignedGpus in class GpuResourceAllocator
> ---
>
> Key: YARN-9370
> URL: https://issues.apache.org/jira/browse/YARN-9370
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Yesha Vora
>Priority: Trivial
>  Labels: newbie, newbie++
> Attachments: YARN-9370.001.patch, YARN-9370.002.patch, 
> YARN-9370.003.patch, YARN-9370.004.patch
>
>
> The last line of 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.gpu.GpuResourceAllocator#recoverAssignedGpus
>  is this: 
> {code:java}
> usedDevices.put(gpuDevice, containerId);
> {code}
> We should have an info-level (or at least a debug-level) log to indicate that 
> a container is allocated to a GPU device during recovery. Please also check the 
> recovery-related code; there may be some room for improvement around logging.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9370) Better logging in recoverAssignedGpus in class GpuResourceAllocator

2019-03-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795390#comment-16795390
 ] 

Hadoop QA commented on YARN-9370:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} YARN-9370 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-9370 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962877/YARN-9370.004.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/23741/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Better logging in recoverAssignedGpus in class GpuResourceAllocator
> ---
>
> Key: YARN-9370
> URL: https://issues.apache.org/jira/browse/YARN-9370
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Yesha Vora
>Priority: Trivial
>  Labels: newbie, newbie++
> Attachments: YARN-9370.001.patch, YARN-9370.002.patch, 
> YARN-9370.003.patch, YARN-9370.004.patch
>
>
> The last line of 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.gpu.GpuResourceAllocator#recoverAssignedGpus
>  is this: 
> {code:java}
> usedDevices.put(gpuDevice, containerId);
> {code}
> We should have an info-level (or at least a debug-level) log to indicate that 
> a container is allocated to a GPU device during recovery. Please also check the 
> recovery-related code; there may be some room for improvement around logging.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9370) Better logging in recoverAssignedGpus in class GpuResourceAllocator

2019-03-18 Thread Yesha Vora (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yesha Vora updated YARN-9370:
-
Attachment: YARN-9370.004.patch

> Better logging in recoverAssignedGpus in class GpuResourceAllocator
> ---
>
> Key: YARN-9370
> URL: https://issues.apache.org/jira/browse/YARN-9370
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Yesha Vora
>Priority: Trivial
>  Labels: newbie, newbie++
> Attachments: YARN-9370.001.patch, YARN-9370.002.patch, 
> YARN-9370.003.patch, YARN-9370.004.patch
>
>
> The last line of 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.gpu.GpuResourceAllocator#recoverAssignedGpus
>  is this: 
> {code:java}
> usedDevices.put(gpuDevice, containerId);
> {code}
> We should have an info-level (or at least a debug-level) log to indicate that 
> a container is allocated to a GPU device during recovery. Please also check the 
> recovery-related code; there may be some room for improvement around logging.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9397) Fix empty NMResourceInfo object test failures in branch-2

2019-03-18 Thread Jonathan Hung (JIRA)
Jonathan Hung created YARN-9397:
---

 Summary: Fix empty NMResourceInfo object test failures in branch-2
 Key: YARN-9397
 URL: https://issues.apache.org/jira/browse/YARN-9397
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jonathan Hung
Assignee: Jonathan Hung
 Attachments: YARN-9397-YARN-8200.001.patch

It appears that the empty-object handling behavior changed between Jersey 
versions (branch-2 is on Jersey 1.9, branch-3 on 1.19).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9397) Fix empty NMResourceInfo object test failures in branch-2

2019-03-18 Thread Jonathan Hung (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-9397:

Attachment: YARN-9397-YARN-8200.001.patch

> Fix empty NMResourceInfo object test failures in branch-2
> -
>
> Key: YARN-9397
> URL: https://issues.apache.org/jira/browse/YARN-9397
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9397-YARN-8200.001.patch
>
>
> It appears that the empty-object handling behavior changed between Jersey 
> versions (branch-2 is on Jersey 1.9, branch-3 on 1.19).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9396) YARN_RM_CONTAINER_CREATED published twice to ATS

2019-03-18 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795276#comment-16795276
 ] 

Prabhu Joseph commented on YARN-9396:
-

[~rohithsharma] [~eyang] Could you review this JIRA when you get time?

> YARN_RM_CONTAINER_CREATED published twice to ATS
> 
>
> Key: YARN-9396
> URL: https://issues.apache.org/jira/browse/YARN-9396
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: ATSv2
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-9396-001.patch
>
>
> RM Container Created event published twice - one from 
> {{ContainerStartedTransition}} (NEW -> ALLOCATED) and another from 
> {{AcquiredTransition}} (ALLOCATED -> ACQUIRED)
> {code}
> 2019-03-18 13:10:13,551 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e11_1552914589043_0001_01_01 Container Transitioned from NEW to 
> ALLOCATED
> 2019-03-18 13:10:13,597 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e11_1552914589043_0001_01_01 Container Transitioned from 
> ALLOCATED to ACQUIRED
> {code}
> *Duplicate Events:*
> {code}
> container_e11_1552914589043_0001_01_01 start:
> 2019-03-18 13:10:13,556 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
>  Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
> id='container_e11_1552914589043_0001_01_01'], JSON-style content: 
> {"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914613542,"info":{}}],"createdtime":1552914613542,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":2048,"YARN_CONTAINER_ALLOCATED_PRIORITY":0,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_01"}
> 2019-03-18 13:10:13,598 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
>  Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
> id='container_e11_1552914589043_0001_01_01'], JSON-style content: 
> {"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914613542,"info":{}}],"createdtime":1552914613542,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":2048,"YARN_CONTAINER_ALLOCATED_PRIORITY":0,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_01"}
> container_e11_1552914589043_0001_01_02 start:
> 2019-03-18 13:10:21,599 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
>  Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
> id='container_e11_1552914589043_0001_01_02'], JSON-style content: 
> {"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914621596,"info":{}}],"createdtime":1552914621596,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":1024,"YARN_CONTAINER_ALLOCATED_PRIORITY":20,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_02"}
> 2019-03-18 13:10:22,344 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
>  Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
> id='container_e11_1552914589043_0001_01_02'], JSON-style content: 
> {"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914621596,"info":{}}],"createdtime":1552914621596,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":1024,"YARN_CONTAINER_ALLOCATED_PRIORITY":20,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_02"}
> container_e11_1552914589043_0001_01_03 start:
> 2019-03-18 13:10:27,918 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
>  Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
> id='container_e11_1552914589043_0001_01_03'], JSON-style content: 
> 

[jira] [Updated] (YARN-9364) Remove commons-logging dependency from remaining hadoop-yarn

2019-03-18 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9364:

Attachment: YARN-9364-001.patch

> Remove commons-logging dependency from remaining hadoop-yarn
> 
>
> Key: YARN-9364
> URL: https://issues.apache.org/jira/browse/YARN-9364
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-9364-001.patch
>
>
> YARN-6712 removed the usage of the commons-logging dependency, so the 
> dependency itself can now be removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9363) Replace isDebugEnabled with SLF4J parameterized log messages for remaining code

2019-03-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795268#comment-16795268
 ] 

Hudson commented on YARN-9363:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16236 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16236/])
YARN-9363.  Replaced debug logging with SLF4J parameterized log message. 
(eyang: rev 5f6e22516668ff94a76737ad5e2cdcb2ff9f6dfd)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegationTokenRenewer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/fpga/FpgaResourceAllocator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/queuemanagement/GuaranteedOrZeroCapacityOverTimePolicy.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractAutoCreatedLeafQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/deviceframework/DeviceMappingManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/numa/NumaNodeResource.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/FileSystemTimelineWriter.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/deviceframework/DevicePluginAdapter.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/numa/NumaResourceHandlerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/deviceframework/DeviceResourceDockerRuntimePluginImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerMXBean.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/src/main/java/org/apache/hadoop/yarn/csi/client/CsiGrpcClient.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/deviceframework/DeviceResourceHandlerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/RecoverPausedContainerLaunch.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestCGroupElasticMemoryController.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/FpgaResourcePlugin.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/numa/NumaResourceAllocator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandlerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/PlacementConstraintsUtil.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/QueueManagementDynamicEditPolicy.java
* (edit) 

[jira] [Commented] (YARN-9363) Replace isDebugEnabled with SLF4J parameterized log messages for remaining code

2019-03-18 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795262#comment-16795262
 ] 

Prabhu Joseph commented on YARN-9363:
-

Thanks [~eyang]!

> Replace isDebugEnabled with SLF4J parameterized log messages for remaining 
> code
> ---
>
> Key: YARN-9363
> URL: https://issues.apache.org/jira/browse/YARN-9363
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: YARN-9363-001.patch, YARN-9363-002.patch, 
> YARN-9363-003.patch, YARN-9363-004.patch
>
>
> Follow-up of YARN-9343 to address the review comments below.
> There are still 200+ LOG.isDebugEnabled() calls in the code. Two things:
> There are a lot of simple one-parameter calls which could easily be converted 
> to unguarded calls, for example in:
> NvidiaDockerV1CommandPlugin.java
> FSParentQueue.java
> Application.java
> Some of the LOG.debug calls inside those guards have not been changed to 
> parameterized calls yet.
> cc [~wilfreds]
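For reference, a small self-contained illustration of the conversion being 
asked for (the class name, message, and parameters are hypothetical):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class DebugLoggingExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(DebugLoggingExample.class);

  void logAssignment(String containerId, String node) {
    // Old pattern: explicit guard plus string concatenation.
    if (LOG.isDebugEnabled()) {
      LOG.debug("Assigned container " + containerId + " to node " + node);
    }
    // New pattern: parameterized message. The arguments are only formatted
    // when debug logging is enabled, so the guard can be dropped for
    // simple parameters like these.
    LOG.debug("Assigned container {} to node {}", containerId, node);
  }
}
{code}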



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9363) Replace isDebugEnabled with SLF4J parameterized log messages for remaining code

2019-03-18 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-9363:

Fix Version/s: 3.3.0

> Replace isDebugEnabled with SLF4J parameterized log messages for remaining 
> code
> ---
>
> Key: YARN-9363
> URL: https://issues.apache.org/jira/browse/YARN-9363
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: YARN-9363-001.patch, YARN-9363-002.patch, 
> YARN-9363-003.patch, YARN-9363-004.patch
>
>
> Follow-up of YARN-9343 to address the review comments below.
> There are still 200+ LOG.isDebugEnabled() calls in the code. Two things:
> There are a lot of simple one-parameter calls which could easily be converted 
> to unguarded calls, for example in:
> NvidiaDockerV1CommandPlugin.java
> FSParentQueue.java
> Application.java
> Some of the LOG.debug calls inside those guards have not been changed to 
> parameterized calls yet.
> cc [~wilfreds]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9385) YARN Services with simple authentication doesn't respect current UGI

2019-03-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795244#comment-16795244
 ] 

Hudson commented on YARN-9385:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16235 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16235/])
YARN-9385.  Fixed ApiServiceClient to use current UGI. (eyang: rev 
19b22c4385a8cf0f89a2ad939380cfd3f033ffdc)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/client/TestApiServiceClient.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java


> YARN Services with simple authentication doesn't respect current UGI
> 
>
> Key: YARN-9385
> URL: https://issues.apache.org/jira/browse/YARN-9385
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, yarn-native-services
>Reporter: Todd Lipcon
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9385.001.patch, YARN-9385.002.patch, 
> YARN-9385.003.patch, YARN-9385.004.patch, YARN-9385.005.patch
>
>
> The ApiServiceClient implementation appends the current username to the 
> request URL for "simple" authentication. However, that username is derived 
> from the 'user.name' system property instead of the current UGI. That means 
> that username spoofing via the 'HADOOP_USER_NAME' variable doesn't take 
> effect for HTTP-based calls in the same manner that it does for RPC-based 
> calls.
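A hedged sketch of the intended direction (the helper below is illustrative and 
not part of the actual patch): resolve the user for simple authentication from 
the current UGI, which honors HADOOP_USER_NAME, rather than from the 
'user.name' system property.

{code:java}
import java.io.IOException;

import org.apache.hadoop.security.UserGroupInformation;

final class SimpleAuthUserSketch {
  private SimpleAuthUserSketch() {
  }

  // Resolves the user name to append to the request URL for simple auth.
  static String resolveUser() throws IOException {
    // The current UGI reflects HADOOP_USER_NAME when security is simple,
    // unlike System.getProperty("user.name").
    return UserGroupInformation.getCurrentUser().getShortUserName();
  }
}
{code}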



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8967) Change FairScheduler to use PlacementRule interface

2019-03-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795184#comment-16795184
 ] 

Hadoop QA commented on YARN-8967:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 17 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 16 unchanged - 1 fixed = 16 total (was 17) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 6 new + 339 unchanged - 67 fixed = 345 total (was 406) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m  1s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestAllocationFileLoaderService
 |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestQueuePlacementPolicy |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-8967 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962819/YARN-8967.008.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e4888e25cc8a 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2db38ab |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle 

[jira] [Commented] (YARN-9358) Add javadoc to new methods introduced in FSQueueMetrics with YARN-9322

2019-03-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795180#comment-16795180
 ] 

Hadoop QA commented on YARN-9358:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 21 unchanged - 0 fixed = 22 total (was 21) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 78m 
37s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9358 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962822/YARN-9358.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 52457485f830 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2db38ab |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/23737/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23737/testReport/ |
| Max. process+thread count | 

[jira] [Commented] (YARN-9396) YARN_RM_CONTAINER_CREATED published twice to ATS

2019-03-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795169#comment-16795169
 ] 

Hadoop QA commented on YARN-9396:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 77m 
33s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9396 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962820/YARN-9396-001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1880fe5667b2 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2db38ab |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23736/testReport/ |
| Max. process+thread count | 901 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/23736/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> YARN_RM_CONTAINER_CREATED published twice to ATS

[jira] [Commented] (YARN-9267) General improvements in FpgaResourceHandlerImpl

2019-03-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795132#comment-16795132
 ] 

Hadoop QA commented on YARN-9267:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 0 new + 46 unchanged - 6 fixed = 46 total (was 52) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
49s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9267 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962824/YARN-9267-006.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux eca27febac90 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2db38ab |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23738/testReport/ |
| Max. process+thread count | 307 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/23738/console |
| Powered by | Apache 

[jira] [Commented] (YARN-9358) Add javadoc to new methods introduced in FSQueueMetrics with YARN-9322

2019-03-18 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795095#comment-16795095
 ] 

Adam Antal commented on YARN-9358:
--

Thanks [~zsiegl], {{setAMResourceUsage}} still has the incorrect javadoc, but 
other than that it looks good.

> Add javadoc to new methods introduced in FSQueueMetrics with YARN-9322
> --
>
> Key: YARN-9358
> URL: https://issues.apache.org/jira/browse/YARN-9358
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Zoltan Siegl
>Priority: Major
> Attachments: YARN-9358.001.patch, YARN-9358.002.patch
>
>
> This is a follow-up for YARN-9322, covering javadoc changes as discussed with 
> [~templedf] earlier.
> As discussed with Daniel, we need to add javadoc for the new methods 
> introduced with YARN-9322 and also for the modified methods. 
> The javadoc should state that Resource Types are also included in the 
> Resource object for both the getters and the setters.
> The methods are: 
> 1. getFairShare / setFairShare
> 2. getSteadyFairShare / setSteadyFairShare
> 3. getMinShare / setMinShare
> 4. getMaxShare / setMaxShare
> 5. getMaxAMShare / setMaxAMShare
> 6. getAMResourceUsage / setAMResourceUsage
> Moreover, a javadoc could be added to the constructor of FSQueueMetrics as 
> well.
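An illustrative shape for such a javadoc, shown on a simplified stand-in class 
(the real methods live in {{FSQueueMetrics}} and their signatures may differ):

{code:java}
import org.apache.hadoop.yarn.api.records.Resource;

class FairShareHolderSketch {
  private Resource fairShare;

  /**
   * Sets the fair share of the queue.
   * <p>
   * The {@link Resource} argument carries custom Resource Types in addition
   * to memory and vcores, and all of them are recorded by this setter.
   *
   * @param fairShare the fair share, including Resource Types
   */
  void setFairShare(Resource fairShare) {
    this.fairShare = fairShare;
  }

  /**
   * @return the fair share of the queue, including Resource Types
   */
  Resource getFairShare() {
    return fairShare;
  }
}
{code}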



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9267) General improvements in FpgaResourceHandlerImpl

2019-03-18 Thread Peter Bacsko (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-9267:
---
Attachment: YARN-9267-006.patch

> General improvements in FpgaResourceHandlerImpl
> ---
>
> Key: YARN-9267
> URL: https://issues.apache.org/jira/browse/YARN-9267
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9267-001.patch, YARN-9267-002.patch, 
> YARN-9267-003.patch, YARN-9267-004.patch, YARN-9267-005.patch, 
> YARN-9267-006.patch
>
>
> Fix some problems in {{FpgaResourceHandlerImpl}}:
>  * {{preStart()}} does not reconfigure card with the same IP - we see it as a 
> problem. If you recompile the FPGA application, you must rename the aocx file 
> because the card will not be reprogrammed. Suggestion: instead of storing 
> Node<\->IPID mapping, store Node<\->IPID hash (like the SHA-256 of the 
> localized file).
>  * Switch to slf4j from Apache Commons Logging
>  * Some unused imports
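Regarding the SHA-256 suggestion in the first bullet, a minimal sketch of how 
the hash of a localized aocx file could be computed with standard JDK classes 
(class and method names are illustrative, not the ones used in the patch):

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

final class AocxHashSketch {
  private AocxHashSketch() {
  }

  // Returns the SHA-256 digest of the localized aocx file as a hex string,
  // usable in the Node<->hash mapping suggested above.
  static String sha256Of(Path aocxFile)
      throws IOException, NoSuchAlgorithmException {
    MessageDigest digest = MessageDigest.getInstance("SHA-256");
    byte[] buffer = new byte[8192];
    try (InputStream in = Files.newInputStream(aocxFile)) {
      int read;
      while ((read = in.read(buffer)) != -1) {
        digest.update(buffer, 0, read);
      }
    }
    StringBuilder hex = new StringBuilder();
    for (byte b : digest.digest()) {
      hex.append(String.format("%02x", b));
    }
    return hex.toString();
  }
}
{code}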



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-9267) General improvements in FpgaResourceHandlerImpl

2019-03-18 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795068#comment-16795068
 ] 

Peter Bacsko edited comment on YARN-9267 at 3/18/19 2:39 PM:
-

[~devaraj.k] please check patch v6 which should be free of checkstyle issues.

Just to be safe, I added two SHA-256 related tests to 
{{TestFpgaResourceHandler}}.


was (Author: pbacsko):
[~devaraj.k] please check patch v6 which should be free of checkstyle issues.

> General improvements in FpgaResourceHandlerImpl
> ---
>
> Key: YARN-9267
> URL: https://issues.apache.org/jira/browse/YARN-9267
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9267-001.patch, YARN-9267-002.patch, 
> YARN-9267-003.patch, YARN-9267-004.patch, YARN-9267-005.patch, 
> YARN-9267-006.patch
>
>
> Fix some problems in {{FpgaResourceHandlerImpl}}:
>  * {{preStart()}} does not reconfigure card with the same IP - we see it as a 
> problem. If you recompile the FPGA application, you must rename the aocx file 
> because the card will not be reprogrammed. Suggestion: instead of storing 
> Node<\->IPID mapping, store Node<\->IPID hash (like the SHA-256 of the 
> localized file).
>  * Switch to slf4j from Apache Commons Logging
>  * Some unused imports



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9267) General improvements in FpgaResourceHandlerImpl

2019-03-18 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795068#comment-16795068
 ] 

Peter Bacsko commented on YARN-9267:


[~devaraj.k] please check patch v6 which should be free of checkstyle issues.

> General improvements in FpgaResourceHandlerImpl
> ---
>
> Key: YARN-9267
> URL: https://issues.apache.org/jira/browse/YARN-9267
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9267-001.patch, YARN-9267-002.patch, 
> YARN-9267-003.patch, YARN-9267-004.patch, YARN-9267-005.patch, 
> YARN-9267-006.patch
>
>
> Fix some problems in {{FpgaResourceHandlerImpl}}:
>  * {{preStart()}} does not reconfigure card with the same IP - we see it as a 
> problem. If you recompile the FPGA application, you must rename the aocx file 
> because the card will not be reprogrammed. Suggestion: instead of storing 
> Node<\->IPID mapping, store Node<\->IPID hash (like the SHA-256 of the 
> localized file).
>  * Switch to slf4j from Apache Commons Logging
>  * Some unused imports



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9267) General improvements in FpgaResourceHandlerImpl

2019-03-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795057#comment-16795057
 ] 

Hadoop QA commented on YARN-9267:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 6 new + 47 unchanged - 5 fixed = 53 total (was 52) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
19s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9267 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962811/YARN-9267-005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1d931e9b4400 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cb4d911 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/23734/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23734/testReport/ |
| Max. process+thread count | 423 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 

[jira] [Updated] (YARN-9358) Add javadoc to new methods introduced in FSQueueMetrics with YARN-9322

2019-03-18 Thread Zoltan Siegl (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Siegl updated YARN-9358:
---
Attachment: YARN-9358.002.patch

> Add javadoc to new methods introduced in FSQueueMetrics with YARN-9322
> --
>
> Key: YARN-9358
> URL: https://issues.apache.org/jira/browse/YARN-9358
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Zoltan Siegl
>Priority: Major
> Attachments: YARN-9358.001.patch, YARN-9358.002.patch
>
>
> This is a follow-up for YARN-9322, covering javadoc changes as discussed with 
> [~templedf] earlier.
> As discussed with Daniel, we need to add javadoc for the new methods 
> introduced with YARN-9322 and also for the modified methods. 
> The javadoc should refer to the fact that Resource Types are also included in 
> the Resource object in case of get/set as well.
> The methods are: 
> 1. getFairShare / setFairShare
> 2. getSteadyFairShare / setSteadyFairShare
> 3. getMinShare / setMinShare
> 4. getMaxShare / setMaxShare
> 5. getMaxAMShare / setMaxAMShare
> 6. getAMResourceUsage / setAMResourceUsage
> Moreover, a javadoc could be added to the constructor of FSQueueMetrics as 
> well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9358) Add javadoc to new methods introduced in FSQueueMetrics with YARN-9322

2019-03-18 Thread Zoltan Siegl (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Siegl updated YARN-9358:
---
Attachment: YARN-9358_002.patch

> Add javadoc to new methods introduced in FSQueueMetrics with YARN-9322
> --
>
> Key: YARN-9358
> URL: https://issues.apache.org/jira/browse/YARN-9358
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Zoltan Siegl
>Priority: Major
> Attachments: YARN-9358.001.patch, YARN-9358.002.patch
>
>
> This is a follow-up for YARN-9322, covering javadoc changes as discussed with 
> [~templedf] earlier.
> As discussed with Daniel, we need to add javadoc for the new methods 
> introduced with YARN-9322 and also for the modified methods. 
> The javadoc should refer to the fact that Resource Types are also included in 
> the Resource object in case of get/set as well.
> The methods are: 
> 1. getFairShare / setFairShare
> 2. getSteadyFairShare / setSteadyFairShare
> 3. getMinShare / setMinShare
> 4. getMaxShare / setMaxShare
> 5. getMaxAMShare / setMaxAMShare
> 6. getAMResourceUsage / setAMResourceUsage
> Moreover, a javadoc could be added to the constructor of FSQueueMetrics as 
> well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9358) Add javadoc to new methods introduced in FSQueueMetrics with YARN-9322

2019-03-18 Thread Zoltan Siegl (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795062#comment-16795062
 ] 

Zoltan Siegl commented on YARN-9358:


[~adam.antal] thank you for the comment; I fixed that and uploaded a new patch.

> Add javadoc to new methods introduced in FSQueueMetrics with YARN-9322
> --
>
> Key: YARN-9358
> URL: https://issues.apache.org/jira/browse/YARN-9358
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Zoltan Siegl
>Priority: Major
> Attachments: YARN-9358.001.patch, YARN-9358.002.patch
>
>
> This is a follow-up for YARN-9322, covering javadoc changes as discussed with 
> [~templedf] earlier.
> As discussed with Daniel, we need to add javadoc for the new methods 
> introduced with YARN-9322 and also for the modified methods. 
> The javadoc should refer to the fact that Resource Types are also included in 
> the Resource object in case of get/set as well.
> The methods are: 
> 1. getFairShare / setFairShare
> 2. getSteadyFairShare / setSteadyFairShare
> 3. getMinShare / setMinShare
> 4. getMaxShare / setMaxShare
> 5. getMaxAMShare / setMaxAMShare
> 6. getAMResourceUsage / setAMResourceUsage
> Moreover, a javadoc could be added to the constructor of FSQueueMetrics as 
> well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9358) Add javadoc to new methods introduced in FSQueueMetrics with YARN-9322

2019-03-18 Thread Zoltan Siegl (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Siegl updated YARN-9358:
---
Attachment: (was: YARN-9358_002.patch)

> Add javadoc to new methods introduced in FSQueueMetrics with YARN-9322
> --
>
> Key: YARN-9358
> URL: https://issues.apache.org/jira/browse/YARN-9358
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Zoltan Siegl
>Priority: Major
> Attachments: YARN-9358.001.patch, YARN-9358.002.patch
>
>
> This is a follow-up for YARN-9322, covering javadoc changes as discussed with 
> [~templedf] earlier.
> As discussed with Daniel, we need to add javadoc for the new methods 
> introduced with YARN-9322 and also for the modified methods. 
> The javadoc should refer to the fact that Resource Types are also included in 
> the Resource object in case of get/set as well.
> The methods are: 
> 1. getFairShare / setFairShare
> 2. getSteadyFairShare / setSteadyFairShare
> 3. getMinShare / setMinShare
> 4. getMaxShare / setMaxShare
> 5. getMaxAMShare / setMaxAMShare
> 6. getAMResourceUsage / setAMResourceUsage
> Moreover, a javadoc could be added to the constructor of FSQueueMetrics as 
> well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9340) [Clean-up] Remove NULL check before instanceof in ResourceRequestSetKey

2019-03-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795061#comment-16795061
 ] 

Hudson commented on YARN-9340:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16230 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16230/])
YARN-9340. [Clean-up] Remove NULL check before instanceof in (templedf: rev 
0e7e9013d4a0785ae22a5a569c3977aeaf7e1900)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/scheduler/ResourceRequestSetKey.java
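
The cleanup relies on a standard Java guarantee: {{instanceof}} already evaluates to false for a {{null}} reference, so a separate null check adds nothing. A minimal illustration (hypothetical snippet, not the actual {{ResourceRequestSetKey}} code):

{code}
// instanceof is defined to yield false when the left-hand side is null,
// so "x != null && x instanceof Foo" can always be shortened to "x instanceof Foo".
public class InstanceofNullCheck {
  public static void main(String[] args) {
    Object key = null;
    boolean withNullCheck = (key != null) && (key instanceof String);
    boolean withoutNullCheck = key instanceof String;      // false, and no NPE
    System.out.println(withNullCheck == withoutNullCheck); // prints "true"
  }
}
{code}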


> [Clean-up] Remove NULL check before instanceof in ResourceRequestSetKey
> ---
>
> Key: YARN-9340
> URL: https://issues.apache.org/jira/browse/YARN-9340
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: YARN-9340.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8967) Change FairScheduler to use PlacementRule interface

2019-03-18 Thread Wilfred Spiegelenburg (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795049#comment-16795049
 ] 

Wilfred Spiegelenburg commented on YARN-8967:
-

Thank you for the review [~yufeigu]

1) yes it did clean up nicely
2) The class is marked as {{@Unstable}}, which should cover the change. Leaving 
the old constructors in could allow you to create a new 
{{AllocationFileLoaderService}} without a scheduler reference. That would cause 
an NPE on scheduler init and every single time the reload thread runs, leaving 
the RM in a failed state. I don't think it would be wise to leave them in. 
Based on all this I do think I need to file a follow-up jira to fix the Hive 
SHIM that uses the policy at the moment and move it to the new code in a 
backward-compatible way.
3) fixed that
4) fixed that
5) The difference between recovery and normal is just two if statements: in the 
first we ignore an empty context on recovery, and in the second we do not 
generate an event on recovery. Moving the code out would not help. The checks 
are on opposite sides of the method and simple.
6) We could still have an empty queue, which was why I left it. I just noticed 
that that case would be caught by {{getLeafQueue}}, so we should be OK with 
removing it.
7) fixed that, it should have been removed

1) I have chosen to use the utility class solution and clean up a bit more. 
Keeping the QueuePlacementPolicy around in the allocation does not really help, 
as the rules are really only relevant in the QueuePlacementManager in the new 
setup. There is no logic besides the rule list, which is not 1:1 with the 
config, that we could keep around.
2) fixed the reference (I used javadoc as there was nothing for other comments, 
now it is just a plain comment)
3) removed the comment and code
4) fixed
5) the tests look really similar but they are not. They test slight variations: 
the first two checks make sure the specified rule and the create user rule 
trigger correctly. The last two make sure that the specified rule triggers but 
the user rule does not, and that the default rule does catch it correctly.
6) fixed that, I left it at first with a view to possible extension later with 
other bits. I now moved the parent create code out and left the loop for the 
elements, which clears things up.
7) added a RuleMap class based on the suggestion
8) I think it is better to file a follow-up jira, as the same has happened in 
all new rule classes. We must have overlooked them in the previous jira when we 
did the cleanup. I checked and the exception is logged in the client service, 
so it can be done.



> Change FairScheduler to use PlacementRule interface
> ---
>
> Key: YARN-8967
> URL: https://issues.apache.org/jira/browse/YARN-8967
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler, fairscheduler
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-8967.001.patch, YARN-8967.002.patch, 
> YARN-8967.003.patch, YARN-8967.004.patch, YARN-8967.005.patch, 
> YARN-8967.006.patch, YARN-8967.007.patch
>
>
> The PlacementRule interface was introduced to be used by all schedulers as 
> per YARN-3635. The CapacityScheduler is using it but the FairScheduler is not 
> and is using its own rule definition.
> YARN-8948 cleans up the implementation and removes the CS references which 
> should allow this change to go through.
> This would be the first step in using one placement rule engine for both 
> schedulers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9396) YARN_RM_CONTAINER_CREATED published twice to ATS

2019-03-18 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9396:

Attachment: YARN-9396-001.patch

> YARN_RM_CONTAINER_CREATED published twice to ATS
> 
>
> Key: YARN-9396
> URL: https://issues.apache.org/jira/browse/YARN-9396
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: ATSv2
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-9396-001.patch
>
>
> RM Container Created event published twice - one from 
> {{ContainerStartedTransition}} (NEW -> ALLOCATED) and another from 
> {{AcquiredTransition}} (ALLOCATED -> ACQUIRED)
> {code}
> 2019-03-18 13:10:13,551 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e11_1552914589043_0001_01_01 Container Transitioned from NEW to 
> ALLOCATED
> 2019-03-18 13:10:13,597 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e11_1552914589043_0001_01_01 Container Transitioned from 
> ALLOCATED to ACQUIRED
> {code}
> *Duplicate Events:*
> {code}
> container_e11_1552914589043_0001_01_01 start:
> 2019-03-18 13:10:13,556 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
>  Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
> id='container_e11_1552914589043_0001_01_01'], JSON-style content: 
> {"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914613542,"info":{}}],"createdtime":1552914613542,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":2048,"YARN_CONTAINER_ALLOCATED_PRIORITY":0,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_01"}
> 2019-03-18 13:10:13,598 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
>  Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
> id='container_e11_1552914589043_0001_01_01'], JSON-style content: 
> {"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914613542,"info":{}}],"createdtime":1552914613542,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":2048,"YARN_CONTAINER_ALLOCATED_PRIORITY":0,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_01"}
> container_e11_1552914589043_0001_01_02 start:
> 2019-03-18 13:10:21,599 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
>  Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
> id='container_e11_1552914589043_0001_01_02'], JSON-style content: 
> {"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914621596,"info":{}}],"createdtime":1552914621596,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":1024,"YARN_CONTAINER_ALLOCATED_PRIORITY":20,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_02"}
> 2019-03-18 13:10:22,344 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
>  Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
> id='container_e11_1552914589043_0001_01_02'], JSON-style content: 
> {"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914621596,"info":{}}],"createdtime":1552914621596,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":1024,"YARN_CONTAINER_ALLOCATED_PRIORITY":20,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_02"}
> container_e11_1552914589043_0001_01_03 start:
> 2019-03-18 13:10:27,918 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
>  Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
> id='container_e11_1552914589043_0001_01_03'], JSON-style content: 
> 

[jira] [Comment Edited] (YARN-8967) Change FairScheduler to use PlacementRule interface

2019-03-18 Thread Wilfred Spiegelenburg (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795049#comment-16795049
 ] 

Wilfred Spiegelenburg edited comment on YARN-8967 at 3/18/19 2:15 PM:
--

Thank you for the review [~yufeigu]

AllocationFileLoaderService file:
1) yes it did clean up nicely
2) The class is marked as {{@Unstable}}, which should cover the change. Leaving 
the old constructors in could allow you to create a new 
{{AllocationFileLoaderService}} without a scheduler reference. That would cause 
an NPE on scheduler init and every single time the reload thread runs, leaving 
the RM in a failed state. I don't think it would be wise to leave them 
in.  _Based on all this I do think I need to file a follow-up jira to fix the 
Hive SHIM that uses the policy at the moment and move it to the new code in a 
backward-compatible way._
3) fixed that
4) fixed that
5) The difference between recovery and normal is just two if statements: in the 
first we ignore an empty context on recovery, and in the second we do not 
generate an event on recovery. Moving the code out would not help. The checks 
are on opposite sides of the method and simple.
6) We could still have an empty queue, which was why I left it. I just noticed 
that that case would be caught by {{getLeafQueue}}, so we should be OK with 
removing it.
7) fixed that, it should have been removed

QueuePlacementPolicy file:
1) I have chosen to use the utility class solution and clean up a bit more. 
Keeping the QueuePlacementPolicy around in the allocation does not really help, 
as the rules are really only relevant in the QueuePlacementManager in the new 
setup. There is no logic besides the rule list, which is not 1:1 with the 
config, that we could keep around.
2) fixed the reference (I used javadoc as there was nothing for other comments, 
now it is just a plain comment)
3) removed the comment and code
4) fixed
5) the tests look really similar but they are not. They test slight variations: 
the first two checks make sure the specified rule and the create user rule 
trigger correctly. The last two make sure that the specified rule triggers but 
the user rule does not, and that the default rule does catch it correctly.
6) fixed that, I left it at first with a view to possible extension later with 
other bits. I now moved the parent create code out and left the loop for the 
elements, which clears things up.
7) added a RuleMap class based on the suggestion
8) I think it is better to file a follow-up jira, as the same has happened in 
all new rule classes. We must have overlooked them in the previous jira when we 
did the cleanup. I checked and the exception is logged in the client service, 
so it can be done.




was (Author: wilfreds):
Thank you for the review [~yufeigu]

1) yes it did clean up nicely
2) The class is marked as {{@ Unstable}} that should cover the change. Leaving 
the old constructors in could allow you to create a new 
{{AllocationFileLoaderService}} without a scheduler reference. That would cause 
a NPE on scheduler init and every single time the reload thread would run, 
leaving the RM in a failed state. I don't think it would be wise to leave them 
in. 
Based on all this I do think I need to file a follow up jira to fix the Hive 
SHIM that uses the policy at the moment and move that to the new code in a 
backward compatible way.
3) fixed that
4) fixed that
5) The difference between recovery and normal is just two if statements: in the 
first we ignore and empty context on recovery and the second one is to not 
generate an event on recovery. Moving the code out would not help. The checks 
are on opposite sides of the method and simple.
6) We could still have an empty queue that was why I left it. I just noticed 
that that case would be caught by the {{getLeafQueue}} so we should be OK with 
removing.
7) fixed that, it should have been removed

1) I have chosen to use the utility class solution and clean up a bit more. 
Keeping the QueuePlacementPolicy around in the allocation does not really help 
as the rules are really only relevant in the QueuePlacementManager in the new 
setup. There is no logic beside the rule list which is not 1:1 with the config 
that we could keep around.
2) fixed the reference (I used javadoc as there was nothing for other comments, 
now it is just a plain comment)
3) removed the comment and code
4) fixed
5) the tests look really similar but they are not. They test slight variations: 
the first two checks make sure the specified rule and create user rule trigger 
correctly. The last two make sure that the specified rule triggers but not the 
user rule and that the default rule does the catch it correctly.
6) fixed that, I left it at first with a view on possible extension later with 
other bits. I now moved the parent create code out and left the loop for 
elements which clears things up.
7) added a RuleMap class 

[jira] [Updated] (YARN-8967) Change FairScheduler to use PlacementRule interface

2019-03-18 Thread Wilfred Spiegelenburg (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wilfred Spiegelenburg updated YARN-8967:

Attachment: YARN-8967.008.patch

> Change FairScheduler to use PlacementRule interface
> ---
>
> Key: YARN-8967
> URL: https://issues.apache.org/jira/browse/YARN-8967
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler, fairscheduler
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-8967.001.patch, YARN-8967.002.patch, 
> YARN-8967.003.patch, YARN-8967.004.patch, YARN-8967.005.patch, 
> YARN-8967.006.patch, YARN-8967.007.patch, YARN-8967.008.patch
>
>
> The PlacementRule interface was introduced to be used by all schedulers as 
> per YARN-3635. The CapacityScheduler is using it but the FairScheduler is not 
> and is using its own rule definition.
> YARN-8948 cleans up the implementation and removes the CS references which 
> should allow this change to go through.
> This would be the first step in using one placement rule engine for both 
> schedulers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9358) Add javadoc to new methods introduced in FSQueueMetrics with YARN-9322

2019-03-18 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795025#comment-16795025
 ] 

Adam Antal commented on YARN-9358:
--

Straightforward patch. 

One minor comment:
In {{void setMaxAMShare}} and {{setAMResourceUsage}} you probably didn't mean
{quote}
the returned Resource also contains custom resource types.
{quote}

because they're setters and not getters.
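
For instance, the setter javadoc could be phrased around the supplied value rather than a returned one. The snippet below is only an illustrative sketch; the actual signatures in {{FSQueueMetrics}} may differ.

{code}
import org.apache.hadoop.yarn.api.records.Resource;

// Illustrative javadoc wording only; not the real FSQueueMetrics declarations.
interface QueueMetricsJavadocExample {

  /**
   * Set the maximum AM share for this queue. The supplied {@code Resource}
   * carries custom resource types in addition to memory and vcores.
   *
   * @param maxAMShare the maximum AM share, including custom resource types
   */
  void setMaxAMShare(Resource maxAMShare);

  /**
   * Get the maximum AM share for this queue.
   *
   * @return the maximum AM share; the returned {@code Resource} also
   *         contains custom resource types
   */
  Resource getMaxAMShare();
}
{code}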

> Add javadoc to new methods introduced in FSQueueMetrics with YARN-9322
> --
>
> Key: YARN-9358
> URL: https://issues.apache.org/jira/browse/YARN-9358
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Zoltan Siegl
>Priority: Major
> Attachments: YARN-9358.001.patch
>
>
> This is a follow-up for YARN-9322, covering javadoc changes as discussed with 
> [~templedf] earlier.
> As discussed with Daniel, we need to add javadoc for the new methods 
> introduced with YARN-9322 and also for the modified methods. 
> The javadoc should refer to the fact that Resource Types are also included in 
> the Resource object in case of get/set as well.
> The methods are: 
> 1. getFairShare / setFairShare
> 2. getSteadyFairShare / setSteadyFairShare
> 3. getMinShare / setMinShare
> 4. getMaxShare / setMaxShare
> 5. getMaxAMShare / setMaxAMShare
> 6. getAMResourceUsage / setAMResourceUsage
> Moreover, a javadoc could be added to the constructor of FSQueueMetrics as 
> well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9396) YARN_RM_CONTAINER_CREATED published twice to ATS

2019-03-18 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795015#comment-16795015
 ] 

Prabhu Joseph commented on YARN-9396:
-

Currently, YARN_RM_CONTAINER_CREATED is published for the following container 
state transitions:

NEW -> ALLOCATED
RESERVED -> ALLOCATED
NEW -> ACQUIRED
ALLOCATED -> ACQUIRED

Of these, ALLOCATED -> ACQUIRED does not need to publish the event.
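
One way to express the fix is to publish the created event only on the transitions that actually allocate the container and skip the ALLOCATED -> ACQUIRED hop. The sketch below is illustrative only and does not mirror the real {{RMContainerImpl}} transition classes; all names are assumptions.

{code}
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative guard: publish YARN_RM_CONTAINER_CREATED at most once per
// container, and never for the ALLOCATED -> ACQUIRED transition.
public class ContainerCreatedPublisher {

  private final AtomicBoolean createdEventPublished = new AtomicBoolean(false);

  public void onTransition(String fromState, String toState) {
    boolean allocating = "ALLOCATED".equals(toState)                 // NEW/RESERVED -> ALLOCATED
        || ("NEW".equals(fromState) && "ACQUIRED".equals(toState));  // NEW -> ACQUIRED
    // compareAndSet keeps the publication idempotent even if several
    // qualifying transitions fire for the same container
    if (allocating && createdEventPublished.compareAndSet(false, true)) {
      publishCreatedEvent();
    }
  }

  private void publishCreatedEvent() {
    System.out.println("publishing YARN_RM_CONTAINER_CREATED");
  }
}
{code}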

> YARN_RM_CONTAINER_CREATED published twice to ATS
> 
>
> Key: YARN-9396
> URL: https://issues.apache.org/jira/browse/YARN-9396
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: ATSv2
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>
> RM Container Created event published twice - one from 
> {{ContainerStartedTransition}} (NEW -> ALLOCATED) and another from 
> {{AcquiredTransition}} (ALLOCATED -> ACQUIRED)
> {code}
> 2019-03-18 13:10:13,551 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e11_1552914589043_0001_01_01 Container Transitioned from NEW to 
> ALLOCATED
> 2019-03-18 13:10:13,597 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e11_1552914589043_0001_01_01 Container Transitioned from 
> ALLOCATED to ACQUIRED
> {code}
> *Duplicate Events:*
> {code}
> container_e11_1552914589043_0001_01_01 start:
> 2019-03-18 13:10:13,556 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
>  Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
> id='container_e11_1552914589043_0001_01_01'], JSON-style content: 
> {"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914613542,"info":{}}],"createdtime":1552914613542,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":2048,"YARN_CONTAINER_ALLOCATED_PRIORITY":0,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_01"}
> 2019-03-18 13:10:13,598 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
>  Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
> id='container_e11_1552914589043_0001_01_01'], JSON-style content: 
> {"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914613542,"info":{}}],"createdtime":1552914613542,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":2048,"YARN_CONTAINER_ALLOCATED_PRIORITY":0,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_01"}
> container_e11_1552914589043_0001_01_02 start:
> 2019-03-18 13:10:21,599 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
>  Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
> id='container_e11_1552914589043_0001_01_02'], JSON-style content: 
> {"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914621596,"info":{}}],"createdtime":1552914621596,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":1024,"YARN_CONTAINER_ALLOCATED_PRIORITY":20,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_02"}
> 2019-03-18 13:10:22,344 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
>  Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
> id='container_e11_1552914589043_0001_01_02'], JSON-style content: 
> {"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914621596,"info":{}}],"createdtime":1552914621596,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":1024,"YARN_CONTAINER_ALLOCATED_PRIORITY":20,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_02"}
> container_e11_1552914589043_0001_01_03 start:
> 2019-03-18 13:10:27,918 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
>  Publishing the entity 

[jira] [Created] (YARN-9396) YARN_RM_CONTAINER_CREATED published twice to ATS

2019-03-18 Thread Prabhu Joseph (JIRA)
Prabhu Joseph created YARN-9396:
---

 Summary: YARN_RM_CONTAINER_CREATED published twice to ATS
 Key: YARN-9396
 URL: https://issues.apache.org/jira/browse/YARN-9396
 Project: Hadoop YARN
  Issue Type: Bug
  Components: ATSv2
Affects Versions: 3.2.0
Reporter: Prabhu Joseph
Assignee: Prabhu Joseph


RM Container Created event published twice - one from 
ContainerStartedTransition (NEW -> ALLOCATED) and another from 
AcquiredTransition (ALLOCATED -> ACQUIRED)

{code}
2019-03-18 13:10:13,551 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
container_e11_1552914589043_0001_01_01 Container Transitioned from NEW to 
ALLOCATED
2019-03-18 13:10:13,597 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
container_e11_1552914589043_0001_01_01 Container Transitioned from 
ALLOCATED to ACQUIRED
{code}


Duplicate Events:

{code}
container_e11_1552914589043_0001_01_01 start:

2019-03-18 13:10:13,556 INFO 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
id='container_e11_1552914589043_0001_01_01'], JSON-style content: 
{"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914613542,"info":{}}],"createdtime":1552914613542,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":2048,"YARN_CONTAINER_ALLOCATED_PRIORITY":0,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_01"}
2019-03-18 13:10:13,598 INFO 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
id='container_e11_1552914589043_0001_01_01'], JSON-style content: 
{"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914613542,"info":{}}],"createdtime":1552914613542,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":2048,"YARN_CONTAINER_ALLOCATED_PRIORITY":0,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_01"}


container_e11_1552914589043_0001_01_02 start:

2019-03-18 13:10:21,599 INFO 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
id='container_e11_1552914589043_0001_01_02'], JSON-style content: 
{"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914621596,"info":{}}],"createdtime":1552914621596,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":1024,"YARN_CONTAINER_ALLOCATED_PRIORITY":20,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_02"}
2019-03-18 13:10:22,344 INFO 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
id='container_e11_1552914589043_0001_01_02'], JSON-style content: 
{"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914621596,"info":{}}],"createdtime":1552914621596,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":1024,"YARN_CONTAINER_ALLOCATED_PRIORITY":20,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_02"}


container_e11_1552914589043_0001_01_03 start:

2019-03-18 13:10:27,918 INFO 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
id='container_e11_1552914589043_0001_01_03'], JSON-style content: 

[jira] [Updated] (YARN-9396) YARN_RM_CONTAINER_CREATED published twice to ATS

2019-03-18 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9396:

Description: 
RM Container Created event published twice - one from 
{{ContainerStartedTransition}} (NEW -> ALLOCATED) and another from 
{{AcquiredTransition}} (ALLOCATED -> ACQUIRED)

{code}
2019-03-18 13:10:13,551 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
container_e11_1552914589043_0001_01_01 Container Transitioned from NEW to 
ALLOCATED
2019-03-18 13:10:13,597 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
container_e11_1552914589043_0001_01_01 Container Transitioned from 
ALLOCATED to ACQUIRED
{code}


*Duplicate Events:*

{code}
container_e11_1552914589043_0001_01_01 start:

2019-03-18 13:10:13,556 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
id='container_e11_1552914589043_0001_01_01'], JSON-style content: 
{"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914613542,"info":{}}],"createdtime":1552914613542,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":2048,"YARN_CONTAINER_ALLOCATED_PRIORITY":0,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_01"}
2019-03-18 13:10:13,598 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
id='container_e11_1552914589043_0001_01_01'], JSON-style content: 
{"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914613542,"info":{}}],"createdtime":1552914613542,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":2048,"YARN_CONTAINER_ALLOCATED_PRIORITY":0,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_01"}


container_e11_1552914589043_0001_01_02 start:

2019-03-18 13:10:21,599 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
id='container_e11_1552914589043_0001_01_02'], JSON-style content: 
{"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914621596,"info":{}}],"createdtime":1552914621596,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":1024,"YARN_CONTAINER_ALLOCATED_PRIORITY":20,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_02"}
2019-03-18 13:10:22,344 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
id='container_e11_1552914589043_0001_01_02'], JSON-style content: 
{"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914621596,"info":{}}],"createdtime":1552914621596,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":1024,"YARN_CONTAINER_ALLOCATED_PRIORITY":20,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_02"}


container_e11_1552914589043_0001_01_03 start:

2019-03-18 13:10:27,918 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
id='container_e11_1552914589043_0001_01_03'], JSON-style content: 
{"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914627917,"info":{}}],"createdtime":1552914627917,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":2048,"YARN_CONTAINER_ALLOCATED_PRIORITY":10,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-3","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-3:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_03"}
2019-03-18 13:10:28,448 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the 

[jira] [Updated] (YARN-9396) YARN_RM_CONTAINER_CREATED published twice to ATS

2019-03-18 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9396:

Description: 
RM Container Created event published twice - one from 
{{ContainerStartedTransition}} (NEW -> ALLOCATED) and another from 
{{AcquiredTransition}} (ALLOCATED -> ACQUIRED)

{code}
2019-03-18 13:10:13,551 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
container_e11_1552914589043_0001_01_01 Container Transitioned from NEW to 
ALLOCATED
2019-03-18 13:10:13,597 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
container_e11_1552914589043_0001_01_01 Container Transitioned from 
ALLOCATED to ACQUIRED
{code}


*Duplicate Events:*

{code}
container_e11_1552914589043_0001_01_01 start:

2019-03-18 13:10:13,556 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
id='container_e11_1552914589043_0001_01_01'], JSON-style content: 
{"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914613542,"info":{}}],"createdtime":1552914613542,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":2048,"YARN_CONTAINER_ALLOCATED_PRIORITY":0,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_01"}
2019-03-18 13:10:13,598 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
id='container_e11_1552914589043_0001_01_01'], JSON-style content: 
{"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914613542,"info":{}}],"createdtime":1552914613542,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":2048,"YARN_CONTAINER_ALLOCATED_PRIORITY":0,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_01"}


container_e11_1552914589043_0001_01_02 start:

2019-03-18 13:10:21,599 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
id='container_e11_1552914589043_0001_01_02'], JSON-style content: 
{"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914621596,"info":{}}],"createdtime":1552914621596,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":1024,"YARN_CONTAINER_ALLOCATED_PRIORITY":20,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_02"}
2019-03-18 13:10:22,344 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
id='container_e11_1552914589043_0001_01_02'], JSON-style content: 
{"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914621596,"info":{}}],"createdtime":1552914621596,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":1024,"YARN_CONTAINER_ALLOCATED_PRIORITY":20,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_02"}


container_e11_1552914589043_0001_01_03 start:

2019-03-18 13:10:27,918 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
id='container_e11_1552914589043_0001_01_03'], JSON-style content: 
{"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914627917,"info":{}}],"createdtime":1552914627917,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":2048,"YARN_CONTAINER_ALLOCATED_PRIORITY":10,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-3","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-3:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_03"}
2019-03-18 13:10:28,448 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the 

[jira] [Updated] (YARN-9396) YARN_RM_CONTAINER_CREATED published twice to ATS

2019-03-18 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9396:

Description: 
RM Container Created event published twice - one from 
{{ContainerStartedTransition}} (NEW -> ALLOCATED) and another from 
{{AcquiredTransition}} (ALLOCATED -> ACQUIRED)

{code}
2019-03-18 13:10:13,551 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
container_e11_1552914589043_0001_01_01 Container Transitioned from NEW to 
ALLOCATED
2019-03-18 13:10:13,597 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
container_e11_1552914589043_0001_01_01 Container Transitioned from 
ALLOCATED to ACQUIRED
{code}


*Duplicate Events:*

{code}
container_e11_1552914589043_0001_01_01 start:

2019-03-18 13:10:13,556 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
id='container_e11_1552914589043_0001_01_01'], JSON-style content: 
{"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914613542,"info":{}}],"createdtime":1552914613542,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":2048,"YARN_CONTAINER_ALLOCATED_PRIORITY":0,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_01"}
2019-03-18 13:10:13,598 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
id='container_e11_1552914589043_0001_01_01'], JSON-style content: 
{"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914613542,"info":{}}],"createdtime":1552914613542,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":2048,"YARN_CONTAINER_ALLOCATED_PRIORITY":0,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_01"}


container_e11_1552914589043_0001_01_02 start:

2019-03-18 13:10:21,599 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
id='container_e11_1552914589043_0001_01_02'], JSON-style content: 
{"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914621596,"info":{}}],"createdtime":1552914621596,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":1024,"YARN_CONTAINER_ALLOCATED_PRIORITY":20,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_02"}
2019-03-18 13:10:22,344 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
id='container_e11_1552914589043_0001_01_02'], JSON-style content: 
{"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914621596,"info":{}}],"createdtime":1552914621596,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":1024,"YARN_CONTAINER_ALLOCATED_PRIORITY":20,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_02"}


container_e11_1552914589043_0001_01_03 start:

2019-03-18 13:10:27,918 INFO 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
id='container_e11_1552914589043_0001_01_03'], JSON-style content: 
{"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914627917,"info":{}}],"createdtime":1552914627917,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":2048,"YARN_CONTAINER_ALLOCATED_PRIORITY":10,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-3","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-3:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_03"}
2019-03-18 13:10:28,448 INFO 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the 

[jira] [Updated] (YARN-9396) YARN_RM_CONTAINER_CREATED published twice to ATS

2019-03-18 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9396:

Description: 
RM Container Created event published twice - one from 
{{ContainerStartedTransition}} (NEW -> ALLOCATED) and another from 
{{AcquiredTransition}} (ALLOCATED -> ACQUIRED)

{code}
2019-03-18 13:10:13,551 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
container_e11_1552914589043_0001_01_01 Container Transitioned from NEW to 
ALLOCATED
2019-03-18 13:10:13,597 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
container_e11_1552914589043_0001_01_01 Container Transitioned from 
ALLOCATED to ACQUIRED
{code}


*Duplicate Events:*

{code}
container_e11_1552914589043_0001_01_01 start:

2019-03-18 13:10:13,556 INFO 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
id='container_e11_1552914589043_0001_01_01'], JSON-style content: 
{"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914613542,"info":{}}],"createdtime":1552914613542,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":2048,"YARN_CONTAINER_ALLOCATED_PRIORITY":0,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_01"}
2019-03-18 13:10:13,598 INFO 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
id='container_e11_1552914589043_0001_01_01'], JSON-style content: 
{"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914613542,"info":{}}],"createdtime":1552914613542,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":2048,"YARN_CONTAINER_ALLOCATED_PRIORITY":0,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_01"}


container_e11_1552914589043_0001_01_02 start:

2019-03-18 13:10:21,599 INFO 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
id='container_e11_1552914589043_0001_01_02'], JSON-style content: 
{"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914621596,"info":{}}],"createdtime":1552914621596,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":1024,"YARN_CONTAINER_ALLOCATED_PRIORITY":20,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_02"}
2019-03-18 13:10:22,344 INFO 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
id='container_e11_1552914589043_0001_01_02'], JSON-style content: 
{"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914621596,"info":{}}],"createdtime":1552914621596,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":1024,"YARN_CONTAINER_ALLOCATED_PRIORITY":20,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-2","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-2:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_02"}


container_e11_1552914589043_0001_01_03 start:

2019-03-18 13:10:27,918 INFO 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity TimelineEntity[type='YARN_CONTAINER', 
id='container_e11_1552914589043_0001_01_03'], JSON-style content: 
{"metrics":[],"events":[{"id":"YARN_RM_CONTAINER_CREATED","timestamp":1552914627917,"info":{}}],"createdtime":1552914627917,"idprefix":0,"info":{"YARN_CONTAINER_ALLOCATED_PORT":45454,"YARN_CONTAINER_ALLOCATED_MEMORY":2048,"YARN_CONTAINER_ALLOCATED_PRIORITY":10,"YARN_CONTAINER_ALLOCATED_HOST":"yarn-ats-3","YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS":"http://yarn-ats-3:8042","YARN_CONTAINER_ALLOCATED_VCORE":1},"configs":{},"isrelatedto":{},"relatesto":{},"type":"YARN_CONTAINER","id":"container_e11_1552914589043_0001_01_03"}
2019-03-18 13:10:28,448 INFO 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
 Publishing the entity 

[jira] [Commented] (YARN-9358) Add javadoc to new methods introduced in FSQueueMetrics with YARN-9322

2019-03-18 Thread Zoltan Siegl (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794995#comment-16794995
 ] 

Zoltan Siegl commented on YARN-9358:


Comment-only patch; no new unit tests are needed.

> Add javadoc to new methods introduced in FSQueueMetrics with YARN-9322
> --
>
> Key: YARN-9358
> URL: https://issues.apache.org/jira/browse/YARN-9358
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Zoltan Siegl
>Priority: Major
> Attachments: YARN-9358.001.patch
>
>
> This is a follow-up for YARN-9322, covering javadoc changes as discussed with 
> [~templedf] earlier.
> As discussed with Daniel, we need to add javadoc for the new methods 
> introduced with YARN-9322 and also for the modified methods. 
> The javadoc should state that Resource Types are also included in the Resource 
> object, for both the getters and the setters.
> The methods are: 
> 1. getFairShare / setFairShare
> 2. getSteadyFairShare / setSteadyFairShare
> 3. getMinShare / setMinShare
> 4. getMaxShare / setMaxShare
> 5. getMaxAMShare / setMaxAMShare
> 6. getAMResourceUsage / setAMResourceUsage
> Moreover, a javadoc could be added to the constructor of FSQueueMetrics as 
> well.
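>
> A minimal illustrative sketch of the kind of javadoc being requested (not the 
> attached patch); the plain {{fairShare}} field below is an assumption, the real 
> {{FSQueueMetrics}} keeps these values as metrics:
> {code}
> import org.apache.hadoop.yarn.api.records.Resource;
>
> /** Sketch only; method names follow the list above, bodies are simplified. */
> class FSQueueMetricsJavadocSketch {
>   private Resource fairShare;
>
>   /**
>    * Returns the fair share of this queue.
>    * <p>
>    * The returned {@link Resource} also carries the values of all configured
>    * Resource Types, not only memory and vcores.
>    *
>    * @return the fair share of the queue, including custom resource types
>    */
>   public Resource getFairShare() {
>     return fairShare;
>   }
>
>   /**
>    * Sets the fair share of this queue.
>    * <p>
>    * The supplied {@link Resource} is expected to include the values of all
>    * configured Resource Types as well.
>    *
>    * @param fairShare the fair share to set, including custom resource types
>    */
>   public void setFairShare(Resource fairShare) {
>     this.fairShare = fairShare;
>   }
> }
> {code}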



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9267) General improvements in FpgaResourceHandlerImpl

2019-03-18 Thread Peter Bacsko (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-9267:
---
Attachment: YARN-9267-005.patch

> General improvements in FpgaResourceHandlerImpl
> ---
>
> Key: YARN-9267
> URL: https://issues.apache.org/jira/browse/YARN-9267
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9267-001.patch, YARN-9267-002.patch, 
> YARN-9267-003.patch, YARN-9267-004.patch, YARN-9267-005.patch
>
>
> Fix some problems in {{FpgaResourceHandlerImpl}}:
>  * {{preStart()}} does not reconfigure card with the same IP - we see it as a 
> problem. If you recompile the FPGA application, you must rename the aocx file 
> because the card will not be reprogrammed. Suggestion: instead of storing 
> Node<\->IPID mapping, store Node<\->IPID hash (like the SHA-256 of the 
> localized file).
>  * Switch to slf4j from Apache Commons Logging
>  * Some unused imports
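>
> A minimal illustrative sketch of the suggested hashing step, assuming plain JDK 
> {{MessageDigest}}; the class and method names are placeholders, not the actual patch:
> {code}
> import java.io.InputStream;
> import java.nio.file.Files;
> import java.nio.file.Path;
> import java.nio.file.Paths;
> import java.security.DigestInputStream;
> import java.security.MessageDigest;
>
> public final class AocxHash {
>   /** Computes the SHA-256 digest of the localized aocx file as a hex string. */
>   public static String sha256Of(Path aocxFile) throws Exception {
>     MessageDigest digest = MessageDigest.getInstance("SHA-256");
>     try (InputStream in = Files.newInputStream(aocxFile);
>          DigestInputStream din = new DigestInputStream(in, digest)) {
>       byte[] buffer = new byte[8192];
>       while (din.read(buffer) != -1) {
>         // reading the stream drives the digest update
>       }
>     }
>     StringBuilder hex = new StringBuilder();
>     for (byte b : digest.digest()) {
>       hex.append(String.format("%02x", b));
>     }
>     return hex.toString();
>   }
>
>   public static void main(String[] args) throws Exception {
>     // Reprogram the card only when the hash of the localized file changes.
>     System.out.println(sha256Of(Paths.get(args[0])));
>   }
> }
> {code}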



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8499) ATS v2 should handle connection issues in general for all storages

2019-03-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794966#comment-16794966
 ] 

Hadoop QA commented on YARN-8499:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 57s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 1 new + 
1 unchanged - 1 fixed = 2 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
14s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-client in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m  
3s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| 

[jira] [Commented] (YARN-9267) General improvements in FpgaResourceHandlerImpl

2019-03-18 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794947#comment-16794947
 ] 

Peter Bacsko commented on YARN-9267:


[~devaraj.k] thanks for the tip, will check this out.

> General improvements in FpgaResourceHandlerImpl
> ---
>
> Key: YARN-9267
> URL: https://issues.apache.org/jira/browse/YARN-9267
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9267-001.patch, YARN-9267-002.patch, 
> YARN-9267-003.patch, YARN-9267-004.patch
>
>
> Fix some problems in {{FpgaResourceHandlerImpl}}:
>  * {{preStart()}} does not reconfigure card with the same IP - we see it as a 
> problem. If you recompile the FPGA application, you must rename the aocx file 
> because the card will not be reprogrammed. Suggestion: instead of storing 
> Node<\->IPID mapping, store Node<\->IPID hash (like the SHA-256 of the 
> localized file).
>  * Switch to slf4j from Apache Commons Logging
>  * Some unused imports



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8148) Update decimal values for queue capacities shown on queue status cli

2019-03-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794927#comment-16794927
 ] 

Hadoop QA commented on YARN-8148:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 25m 
32s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-8148 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962791/YARN-8148-002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8e716dc6b135 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 926d548 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23732/testReport/ |
| Max. process+thread count | 682 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/23732/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Update decimal values for queue capacities shown on queue status cli
> 

[jira] [Updated] (YARN-8499) ATS v2 should handle connection issues in general for all storages

2019-03-18 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-8499:

Attachment: YARN-8499-007.patch

> ATS v2 should handle connection issues in general for all storages
> --
>
> Key: YARN-8499
> URL: https://issues.apache.org/jira/browse/YARN-8499
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2
>Reporter: Sunil Govindan
>Assignee: Prabhu Joseph
>Priority: Major
>  Labels: atsv2
> Attachments: YARN-8499-001.patch, YARN-8499-002.patch, 
> YARN-8499-003.patch, YARN-8499-004.patch, YARN-8499-005.patch, 
> YARN-8499-006.patch, YARN-8499-007.patch
>
>
> Post YARN-8302, HBase connection issues are handled in ATSv2. However, this 
> could be generalized by introducing an API in the storage interface and 
> implementing it in each storage backend according to the store's semantics.
>  
> cc [~rohithsharma] [~vinodkv] [~vrushalic]
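>
> A rough sketch of what such a general API could look like; the interface and 
> method names below are illustrative assumptions, not the actual patch:
> {code}
> /** Sketch only: a health-check hook each timeline storage backend could implement. */
> interface TimelineStorageMonitor {
>   /** @throws Exception if the underlying store is currently unreachable. */
>   void checkStorageConnection() throws Exception;
> }
>
> /** Example backend implementation; the probe shown is a placeholder. */
> class HBaseStorageMonitor implements TimelineStorageMonitor {
>   @Override
>   public void checkStorageConnection() throws Exception {
>     // e.g. issue a lightweight read against a known table and propagate
>     // any connection failure to the caller, as the HBase reader does today
>   }
> }
> {code}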



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8499) ATS v2 should handle connection issues in general for all storages

2019-03-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794913#comment-16794913
 ] 

Hadoop QA commented on YARN-8499:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 33 new 
+ 1 unchanged - 1 fixed = 34 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
14s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-client in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 
16s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 

[jira] [Commented] (YARN-9357) HBaseTimelineReaderImpl storage monitor log level change

2019-03-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794892#comment-16794892
 ] 

Hadoop QA commented on YARN-9357:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-client in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9357 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962789/YARN-9357-002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9e666ba874f3 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 926d548 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23730/testReport/ |
| Max. process+thread count | 412 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 |

[jira] [Commented] (YARN-9373) HBaseTimelineSchemaCreator has to allow user to configure pre-splits

2019-03-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794891#comment-16794891
 ] 

Hadoop QA commented on YARN-9373:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} pathlen {color} | {color:red}  0m  
0s{color} | {color:red} The patch appears to contain 4 files with names longer 
than 240 {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-client in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9373 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962786/YARN-9373-002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 468a3d888918 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 926d548 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| pathlen | 
https://builds.apache.org/job/PreCommit-YARN-Build/23729/artifact/out/pathlen.txt
 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23729/testReport/ |
| Max. process+thread count | 445 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 U: 

[jira] [Commented] (YARN-9387) ATS HBase Custom tablenames (-entityTableName) not honored at read / write

2019-03-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794882#comment-16794882
 ] 

Hadoop QA commented on YARN-9387:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
29m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9387 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962785/YARN-9387-001.patch |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux 9d928195f663 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 926d548 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 307 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/23728/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> ATS HBase Custom tablenames (-entityTableName) not honored at read / write
> --
>
> Key: YARN-9387
> URL: https://issues.apache.org/jira/browse/YARN-9387
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2
>Affects Versions: 3.1.2, 3.3.0, 3.2.1
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Critical
> Attachments: Screen Shot 2019-03-15 at 1.21.21 PM.png, 
> YARN-9387-001.patch
>
>
> {{HbaseTimelineSchemaCreator}} provides an option to specify a custom table name 
> and creates the table properly. But {{HBaseTimelineWriterImpl}} / 
> {{HBaseTimelineReaderImpl}} do not know the custom name and use the table 
> with the default name, leading to data loss.
> The NM {{TimelineCollector}} inserts into the default table name 
> {{prod.timelineservice.entity}}, which does not exist.
> {code}
> 2019-03-14 15:37:10,739 WARN 
> org.apache.hadoop.yarn.webapp.GenericExceptionHandler: INTERNAL_SERVER_ERROR
> javax.ws.rs.WebApplicationException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 
> 20 actions: Table 'prod.timelineservice.entity' was not found, got: 
> prod.timelineservice.domain.: 20 times,
> at 
> org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollectorWebService.putEntities(TimelineCollectorWebService.java:197)
> at sun.reflect.GeneratedMethodAccessor46.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> 

[jira] [Updated] (YARN-8499) ATS v2 should handle connection issues in general for all storages

2019-03-18 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-8499:

Attachment: YARN-8499-006.patch

> ATS v2 should handle connection issues in general for all storages
> --
>
> Key: YARN-8499
> URL: https://issues.apache.org/jira/browse/YARN-8499
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2
>Reporter: Sunil Govindan
>Assignee: Prabhu Joseph
>Priority: Major
>  Labels: atsv2
> Attachments: YARN-8499-001.patch, YARN-8499-002.patch, 
> YARN-8499-003.patch, YARN-8499-004.patch, YARN-8499-005.patch, 
> YARN-8499-006.patch
>
>
> Post YARN-8302, HBase connection issues are handled in ATSv2. However, this 
> could be generalized by introducing an API in the storage interface and 
> implementing it in each storage backend according to the store's semantics.
>  
> cc [~rohithsharma] [~vinodkv] [~vrushalic]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8148) Update decimal values for queue capacities shown on queue status cli

2019-03-18 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-8148:

Attachment: YARN-8148-002.patch

> Update decimal values for queue capacities shown on queue status cli
> 
>
> Key: YARN-8148
> URL: https://issues.apache.org/jira/browse/YARN-8148
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client
>Affects Versions: 3.0.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-8148-002.patch, YARN-8148.1.patch
>
>
> Capacities are shown with two decimal values in the RM UI as of YARN-6182. 
> The queue status CLI is still showing one decimal value.
> {code}
> [root@bigdata3 yarn]# yarn queue -status default
> Queue Information : 
> Queue Name : default
>   State : RUNNING
>   Capacity : 69.9%
>   Current Capacity : .0%
>   Maximum Capacity : 70.0%
>   Default Node Label expression : 
>   Accessible Node Labels : *
>   Preemption : enabled
> {code}
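>
> A minimal sketch of the change implied, assuming the CLI formats the percentage 
> with {{String.format}}; the actual field and class names in the queue CLI may differ:
> {code}
> public class CapacityFormatDemo {
>   public static void main(String[] args) {
>     float capacity = 0.699f; // fraction reported by the scheduler
>
>     // Current CLI behaviour: one decimal place, e.g. "69.9%"
>     System.out.println(String.format("%.1f%%", capacity * 100));
>
>     // Proposed: two decimal places to match the RM UI (YARN-6182), e.g. "69.90%"
>     System.out.println(String.format("%.2f%%", capacity * 100));
>   }
> }
> {code}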



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9387) ATS HBase Custom tablenames (-entityTableName) not honored at read / write

2019-03-18 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9387:

Attachment: (was: YARN-9387-001.patch)

> ATS HBase Custom tablenames (-entityTableName) not honored at read / write
> --
>
> Key: YARN-9387
> URL: https://issues.apache.org/jira/browse/YARN-9387
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2
>Affects Versions: 3.1.2, 3.3.0, 3.2.1
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Critical
> Attachments: Screen Shot 2019-03-15 at 1.21.21 PM.png, 
> YARN-9387-001.patch
>
>
> {{HbaseTimelineSchemaCreator}} provides an option to specify a custom table name 
> and creates the table properly. But {{HBaseTimelineWriterImpl}} / 
> {{HBaseTimelineReaderImpl}} do not know the custom name and use the table 
> with the default name, leading to data loss.
> The NM {{TimelineCollector}} inserts into the default table name 
> {{prod.timelineservice.entity}}, which does not exist.
> {code}
> 2019-03-14 15:37:10,739 WARN 
> org.apache.hadoop.yarn.webapp.GenericExceptionHandler: INTERNAL_SERVER_ERROR
> javax.ws.rs.WebApplicationException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 
> 20 actions: Table 'prod.timelineservice.entity' was not found, got: 
> prod.timelineservice.domain.: 20 times,
> at 
> org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollectorWebService.putEntities(TimelineCollectorWebService.java:197)
> at sun.reflect.GeneratedMethodAccessor46.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9357) HBaseTimelineReaderImpl storage monitor log level change

2019-03-18 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9357:

Attachment: YARN-9357-002.patch

> HBaseTimelineReaderImpl storage monitor log level change
> 
>
> Key: YARN-9357
> URL: https://issues.apache.org/jira/browse/YARN-9357
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: ATSv2
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Minor
> Attachments: YARN-9357-001.patch, YARN-9357-002.patch
>
>
> The HBaseTimelineReaderImpl storage monitor logs the message below every minute. 
> It should be changed to DEBUG level.
> {code}
> 2019-03-07 13:48:28,764 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> 2019-03-07 13:49:28,763 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> 2019-03-07 13:50:28,764 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> 2019-03-07 13:51:28,764 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> 2019-03-07 13:52:28,763 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> 2019-03-07 13:53:28,763 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> 2019-03-07 13:54:28,763 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> 2019-03-07 13:55:28,764 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> 2019-03-07 13:56:28,764 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> 2019-03-07 13:57:28,764 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> 2019-03-07 13:58:28,764 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> 2019-03-07 13:59:28,764 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> 2019-03-07 14:00:28,764 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> 2019-03-07 14:01:28,763 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> 2019-03-07 14:02:28,764 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> 2019-03-07 14:03:28,764 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> 2019-03-07 14:04:28,764 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> 2019-03-07 14:05:28,764 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> 2019-03-07 14:06:28,764 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> 2019-03-07 14:07:28,764 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> 2019-03-07 14:08:28,764 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> 2019-03-07 14:09:28,764 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> 2019-03-07 14:10:28,764 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> 2019-03-07 14:11:28,764 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> 2019-03-07 14:12:28,764 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> 2019-03-07 14:13:28,763 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl:
>  Running HBase liveness monitor
> {code}
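>
> A minimal sketch of the requested log-level change, assuming an slf4j logger; 
> the surrounding monitor class is simplified:
> {code}
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
>
> class HBaseMonitorLogDemo {
>   private static final Logger LOG =
>       LoggerFactory.getLogger(HBaseMonitorLogDemo.class);
>
>   void runLivenessCheck() {
>     // Before: logged at INFO, filling the log once a minute
>     // LOG.info("Running HBase liveness monitor");
>
>     // After: only visible when DEBUG is enabled for this logger
>     LOG.debug("Running HBase liveness monitor");
>   }
> }
> {code}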



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9373) HBaseTimelineSchemaCreator has to allow user to configure pre-splits

2019-03-18 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9373:

Attachment: YARN-9373-002.patch

> HBaseTimelineSchemaCreator has to allow user to configure pre-splits
> 
>
> Key: YARN-9373
> URL: https://issues.apache.org/jira/browse/YARN-9373
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: Configurable_PreSplits.png, YARN-9373-001.patch, 
> YARN-9373-002.patch
>
>
> Most of the TimelineService HBase tables are created with username-based splits 
> drawn from the lowercase alphabet (a, ad, an, b, ca, ...). This does not help if 
> the rowkey starts with a number or an uppercase letter. We need to allow the user 
> to configure the splits based on their data. For example, if a user has configured 
> yarn.resourcemanager.cluster-id as ATS or 123, then the splits can be 
> configured as A,B,C,... or 100,200,300,...
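>
> A rough sketch of turning a user-supplied, comma-separated split list into the 
> {{byte[][]}} that HBase table creation accepts; the split value shown is only an 
> example and the configuration key name is deliberately left out:
> {code}
> import org.apache.hadoop.hbase.util.Bytes;
>
> public class PreSplitDemo {
>   /** Parses a comma-separated split list, e.g. "A,B,C" or "100,200,300". */
>   static byte[][] parseSplits(String configuredSplits) {
>     String[] keys = configuredSplits.split(",");
>     byte[][] splits = new byte[keys.length][];
>     for (int i = 0; i < keys.length; i++) {
>       splits[i] = Bytes.toBytes(keys[i].trim());
>     }
>     return splits;
>   }
>
>   public static void main(String[] args) {
>     byte[][] splits = parseSplits("A,B,C");
>     System.out.println(splits.length + " split keys parsed");
>     // admin.createTable(tableDescriptor, splits);  // handed to the HBase Admin
>   }
> }
> {code}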



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9373) HBaseTimelineSchemaCreator has to allow user to configure pre-splits

2019-03-18 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9373:

Attachment: (was: YARN-9373-002.patch)

> HBaseTimelineSchemaCreator has to allow user to configure pre-splits
> 
>
> Key: YARN-9373
> URL: https://issues.apache.org/jira/browse/YARN-9373
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: Configurable_PreSplits.png, YARN-9373-001.patch
>
>
> Most of the TimelineService HBase tables are created with username-based splits 
> drawn from the lowercase alphabet (a, ad, an, b, ca, ...). This does not help if 
> the rowkey starts with a number or an uppercase letter. We need to allow the user 
> to configure the splits based on their data. For example, if a user has configured 
> yarn.resourcemanager.cluster-id as ATS or 123, then the splits can be 
> configured as A,B,C,... or 100,200,300,...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9387) ATS HBase Custom tablenames (-entityTableName) not honored at read / write

2019-03-18 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9387:

Attachment: YARN-9387-001.patch

> ATS HBase Custom tablenames (-entityTableName) not honored at read / write
> --
>
> Key: YARN-9387
> URL: https://issues.apache.org/jira/browse/YARN-9387
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2
>Affects Versions: 3.1.2, 3.3.0, 3.2.1
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Critical
> Attachments: Screen Shot 2019-03-15 at 1.21.21 PM.png, 
> YARN-9387-001.patch
>
>
> {{HbaseTimelineSchemaCreator}} provides an option to specify a custom table name 
> and creates the table properly. But {{HBaseTimelineWriterImpl}} / 
> {{HBaseTimelineReaderImpl}} do not know the custom name and use the table 
> with the default name, leading to data loss.
> The NM {{TimelineCollector}} inserts into the default table name 
> {{prod.timelineservice.entity}}, which does not exist.
> {code}
> 2019-03-14 15:37:10,739 WARN 
> org.apache.hadoop.yarn.webapp.GenericExceptionHandler: INTERNAL_SERVER_ERROR
> javax.ws.rs.WebApplicationException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 
> 20 actions: Table 'prod.timelineservice.entity' was not found, got: 
> prod.timelineservice.domain.: 20 times,
> at 
> org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollectorWebService.putEntities(TimelineCollectorWebService.java:197)
> at sun.reflect.GeneratedMethodAccessor46.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
> {code}
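>
> A minimal sketch of the fix direction: the reader/writer side resolves the table 
> name from configuration instead of hard-coding the default. The property name 
> below is a hypothetical placeholder, not necessarily the key the schema creator uses:
> {code}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.TableName;
>
> public class EntityTableNameDemo {
>   // Hypothetical property name, for illustration only.
>   static final String ENTITY_TABLE_NAME_KEY = "yarn.timeline-service.entity-table.name";
>   static final String DEFAULT_ENTITY_TABLE_NAME = "prod.timelineservice.entity";
>
>   /** Reader and writer should resolve the name the same way the schema creator does. */
>   static TableName entityTable(Configuration conf) {
>     return TableName.valueOf(
>         conf.get(ENTITY_TABLE_NAME_KEY, DEFAULT_ENTITY_TABLE_NAME));
>   }
>
>   public static void main(String[] args) {
>     Configuration conf = new Configuration();
>     conf.set(ENTITY_TABLE_NAME_KEY, "prod.timelineservice.entity.custom");
>     System.out.println(entityTable(conf));
>   }
> }
> {code}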



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9394) Use new API of RackResolver to get better performance

2019-03-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794828#comment-16794828
 ] 

Hadoop QA commented on YARN-9394:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 26m  7s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.client.api.impl.TestAMRMClientContainerRequest |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9394 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962775/YARN-9394.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f62b439f7258 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 926d548 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/23727/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23727/testReport/ |
| Max. process+thread count | 682 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: 

[jira] [Commented] (YARN-9394) Use new API of RackResolver to get better performance

2019-03-18 Thread Lantao Jin (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794796#comment-16794796
 ] 

Lantao Jin commented on YARN-9394:
--

Hi [~cheersyang], this is a follow-up of 
[YARN-9332|https://issues.apache.org/jira/browse/YARN-9332]. It could benefit 
callers of AMRMClient.
Besides, MapReduce also has some callers of the old API, for example 
https://github.com/apache/hadoop/blob/e97acb3bd8f3befd27418996fa5d4b50bf2e17bf/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java#L686
 and 
https://github.com/apache/hadoop/blob/e97acb3bd8f3befd27418996fa5d4b50bf2e17bf/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerAllocator.java#L1444

> Use new API of RackResolver to get better performance
> -
>
> Key: YARN-9394
> URL: https://issues.apache.org/jira/browse/YARN-9394
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.3.0, 3.2.1
>Reporter: Lantao Jin
>Assignee: Lantao Jin
>Priority: Major
> Attachments: YARN-9394.001.patch
>
>
> After a new API was added to RackResolver in YARN-9332, some old callers 
> should switch to the new API to get better performance. As an example, Spark's 
> [YarnAllocator|https://github.com/apache/spark/blob/733f2c0b98208815f8408e36ab669d7c07e3767f/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala#L361-L363]
>  for Dynamic Allocation invokes 
> [https://github.com/apache/hadoop/blob/6fa229891e06eea62cb9634efde755f40247e816/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/AMRMClientImpl.java#L550]
>  to resolve racks in a loop.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9394) Use new API of RackResolver to get better performance

2019-03-18 Thread Lantao Jin (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lantao Jin updated YARN-9394:
-
Attachment: YARN-9394.001.patch

> Use new API of RackResolver to get better performance
> -
>
> Key: YARN-9394
> URL: https://issues.apache.org/jira/browse/YARN-9394
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.3.0, 3.2.1
>Reporter: Lantao Jin
>Assignee: Lantao Jin
>Priority: Major
> Attachments: YARN-9394.001.patch
>
>
> After a new API was added to RackResolver in YARN-9332, some old callers 
> should switch to the new API to get better performance. As an example, Spark's 
> [YarnAllocator|https://github.com/apache/spark/blob/733f2c0b98208815f8408e36ab669d7c07e3767f/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala#L361-L363]
>  for Dynamic Allocation invokes 
> [https://github.com/apache/hadoop/blob/6fa229891e06eea62cb9634efde755f40247e816/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/AMRMClientImpl.java#L550]
>  to resolve racks in a loop.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9278) Shuffle nodes when selecting to be preempted nodes

2019-03-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794783#comment-16794783
 ] 

Hadoop QA commented on YARN-9278:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 79m 
27s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}149m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9278 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962761/YARN-9278.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  

[jira] [Created] (YARN-9395) Short Names for repeated Hbase Column names

2019-03-18 Thread Prabhu Joseph (JIRA)
Prabhu Joseph created YARN-9395:
---

 Summary: Short Names for repeated Hbase Column names
 Key: YARN-9395
 URL: https://issues.apache.org/jira/browse/YARN-9395
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: ATSv2
Affects Versions: 3.2.0
Reporter: Prabhu Joseph
Assignee: Prabhu Joseph


Currently the ATS HBase tables store the config name / metric name as column 
names, which are long. These repeat for every row and consume a lot of storage 
space; we have seen customers' HBase tables grow to more than 1.5 TB within a 
few days.

{code}
Example Configs:
c:yarn.timeline-service.webapp.rest-csrf.methods-to-ignore
c:yarn.timeline-service.entity-group-fs-store.active-dir
c:yarn.scheduler.configuration.zk-store.parent-path

Example Metrics:
m:REDUCE:org.apache.hadoop.mapreduce.FileSystemCounter:HDFS_READ_OPS
m:REDUCE:org.apache.hadoop.mapreduce.TaskCounter:COMBINE_INPUT_RECORDS
m:REDUCE:org.apache.hadoop.mapreduce.TaskCounter:PHYSICAL_MEMORY_BYTES
{code}

We need to use short column names as per HBase best practice - 
http://moi.vonos.net/bigdata/avro-hbase-colnames/ The challenge is that ATS 
does not know the column names until the rows get inserted. We could provide a 
mapping file that maps the repeated configs / metrics / info from different 
applications to unique short codes, which customers can configure upfront to 
save storage space (a rough sketch follows below). This is similar to what 
Phoenix does:

https://blogs.apache.org/phoenix/entry/column-mapping-and-immutable-data
https://phoenix.apache.org/columnencoding.html
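
A purely illustrative sketch of how such a customer-provided mapping file could 
be applied before writes; the class, method and file format below are 
hypothetical and not part of any patch:

{code}
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Properties;

// Hypothetical helper: loads a customer-provided mapping of long, repeated
// column qualifiers to short codes and applies it when writing columns.
public class ColumnNameMapper {

  private final Properties mapping = new Properties();

  // The mapping file is a plain properties file configured upfront, e.g.
  //   yarn.timeline-service.webapp.rest-csrf.methods-to-ignore=c1
  //   REDUCE:org.apache.hadoop.mapreduce.TaskCounter:PHYSICAL_MEMORY_BYTES=m3
  public ColumnNameMapper(String mappingFile) throws IOException {
    try (InputStream in = Files.newInputStream(Paths.get(mappingFile))) {
      mapping.load(in);
    }
  }

  // Returns the short code if one is configured, otherwise the original
  // qualifier, so unmapped columns keep working unchanged.
  public String toStoredQualifier(String longQualifier) {
    return mapping.getProperty(longQualifier, longQualifier);
  }
}
{code}

Because unmapped qualifiers fall back to the original name, such a mapping 
could be rolled out incrementally without breaking existing rows.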




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9394) Use new API of RackResolver to get better performance

2019-03-18 Thread Lantao Jin (JIRA)
Lantao Jin created YARN-9394:


 Summary: Use new API of RackResolver to get better performance
 Key: YARN-9394
 URL: https://issues.apache.org/jira/browse/YARN-9394
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: yarn
Affects Versions: 3.3.0, 3.2.1
Reporter: Lantao Jin
Assignee: Lantao Jin


After a new API was added to RackResolver in YARN-9332, some old callers should 
switch to the new API to get better performance. As an example, Spark's 
[YarnAllocator|https://github.com/apache/spark/blob/733f2c0b98208815f8408e36ab669d7c07e3767f/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala#L361-L363]
 for Dynamic Allocation invokes 
[https://github.com/apache/hadoop/blob/6fa229891e06eea62cb9634efde755f40247e816/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/AMRMClientImpl.java#L550]
 to resolve racks in a loop.
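
For illustration, a minimal sketch of the difference for a caller such as 
AMRMClientImpl, assuming the batch method added in YARN-9332 looks like 
{{RackResolver.resolve(List<String>)}} returning {{List<Node>}} (the exact 
signature is an assumption here, not quoted from the patch):

{code}
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.net.Node;
import org.apache.hadoop.yarn.util.RackResolver;

public class RackResolveSketch {

  // Old pattern: resolve each host separately; every call can go through the
  // topology script/plugin, which gets slow for large host lists.
  static List<String> racksOneByOne(Configuration conf, List<String> hosts) {
    List<String> racks = new ArrayList<>();
    for (String host : hosts) {
      Node node = RackResolver.resolve(conf, host);   // one lookup per host
      racks.add(node.getNetworkLocation());
    }
    return racks;
  }

  // New pattern (assumed batch API from YARN-9332): hand the whole list to
  // the resolver in one call so it can batch the underlying lookups.
  static List<String> racksInBatch(Configuration conf, List<String> hosts) {
    RackResolver.init(conf);
    List<Node> nodes = RackResolver.resolve(hosts);   // single batch lookup
    List<String> racks = new ArrayList<>(nodes.size());
    for (Node node : nodes) {
      racks.add(node.getNetworkLocation());
    }
    return racks;
  }
}
{code}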



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org