[jira] [Commented] (YARN-9615) Add dispatcher metrics to RM
[ https://issues.apache.org/jira/browse/YARN-9615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16868266#comment-16868266 ] Bibin A Chundatt commented on YARN-9615: Thank you [~jhung] for adding the POC.
* We could go for a generic approach covering every event type:
{code}
public class GenericEventTypeMetrics<T extends Enum<T>> implements EventTypeMetrics<T> {
  private final EnumMap<T, MutableGaugeLong> metrics;
  private final MetricsRegistry registry;
  private final MetricsSystem ms;
  private final MetricsInfo info;
  private final Class<T> enumClass;

  public GenericEventTypeMetrics(MetricsInfo info, MetricsSystem ms, T[] enums,
      Class<T> enumClass) {
    this.enumClass = enumClass;
    this.metrics = new EnumMap<>(this.enumClass);
    this.ms = ms;
    this.info = info;
    this.registry = new MetricsRegistry(this.info);
    // Initialize a gauge for each enum constant
    for (T type : enums) {
      String metricsName = type.toString().toLowerCase() + "_" + info.description();
      metrics.put(type, this.registry.newGauge(metricsName, metricsName, 0L));
    }
  }
}
{code}
*Interface*
{code}
public interface EventTypeMetrics<T extends Enum<T>> extends MetricsSource {
  public void incr(T type);
  public void decr(T type);
  public long get(T type);
  public EventTypeMetrics<T> register();
  public void unregister();
}
{code}
*Initialize each type of metrics using*
{code}
this.appEventTypeMetrics =
    new GenericEventTypeMetrics.EventTypeMetricsBuilder<RMAppEventType>()
        .setMetricsInfo(RMAPPEEVENTTYPE_RECORD_INFO)
        .setMetricsSystem(ms)
        .setEnumVals(RMAppEventType.values())
        .setEnumClass(RMAppEventType.class)
        .build();
{code}
* The above design avoids creating an individual class for each type of event.
* Add an event metrics manager that creates the EventTypeMetrics instance based on configuration, to enable dispatcher metrics.
* In case the event type metrics configuration is not enabled, assign a {{DisableEventTypeMetrics}} implementation of {{EventTypeMetrics}} that does nothing.
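To make the EnumMap-per-event-type idea above concrete, here is a self-contained model using plain {{java.util.concurrent.atomic.AtomicLong}} gauges in place of Hadoop's {{MetricsRegistry}}/{{MutableGaugeLong}}. The class and enum names below are illustrative only, not part of the POC:

```java
import java.util.EnumMap;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the generic per-event-type gauge pattern. AtomicLong stands in
// for Hadoop's MutableGaugeLong; no MetricsSystem registration is modeled.
public class GenericEventGauges<T extends Enum<T>> {
  private final EnumMap<T, AtomicLong> metrics;

  public GenericEventGauges(Class<T> enumClass) {
    this.metrics = new EnumMap<>(enumClass);
    // One gauge per enum constant, mirroring the POC's constructor loop.
    for (T type : enumClass.getEnumConstants()) {
      metrics.put(type, new AtomicLong(0L));
    }
  }

  public void incr(T type) { metrics.get(type).incrementAndGet(); }
  public void decr(T type) { metrics.get(type).decrementAndGet(); }
  public long get(T type)  { return metrics.get(type).get(); }

  // Illustrative event type enum (stand-in for e.g. RMAppEventType).
  public enum AppEvent { START, KILL }
}
```

Because the map is keyed by the enum class, one implementation serves every dispatcher event type without a dedicated metrics class per type.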
> Add dispatcher metrics to RM > > > Key: YARN-9615 > URL: https://issues.apache.org/jira/browse/YARN-9615 > Project: Hadoop YARN > Issue Type: Task >Reporter: Jonathan Hung >Assignee: Jonathan Hung >Priority: Major > Attachments: YARN-9615.poc.patch, screenshot-1.png > > > It'd be good to have counts/processing times for each event type in RM async > dispatcher and scheduler async dispatcher. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9596) QueueMetrics has incorrect metrics when labelled partitions are involved
[ https://issues.apache.org/jira/browse/YARN-9596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16868264#comment-16868264 ] Manikandan R commented on YARN-9596: [~samkhan] [~eepayne] Sorry for the delay. I have been trying to understand this issue with respect to YARN-9088 and from the perspective of the other metrics as well. I will post an update ASAP. In the meantime, can you also take a look at those? > QueueMetrics has incorrect metrics when labelled partitions are involved > > > Key: YARN-9596 > URL: https://issues.apache.org/jira/browse/YARN-9596 > Project: Hadoop YARN > Issue Type: Bug > Components: capacity scheduler >Affects Versions: 2.8.0, 3.3.0 >Reporter: Muhammad Samir Khan >Assignee: Muhammad Samir Khan >Priority: Major > Attachments: Screen Shot 2019-06-03 at 4.41.45 PM.png, Screen Shot > 2019-06-03 at 4.44.15 PM.png, YARN-9596.001.patch, YARN-9596.002.patch > > > After YARN-6467, QueueMetrics should only be tracking metrics for the default > partition. However, the metrics are incorrect when labelled partitions are > involved. > Steps to reproduce > == > # Configure capacity-scheduler.xml with label configuration > # Add label "test" to the cluster and replace the label on node1 to be "test" > # Note down "totalMB" at > /ws/v1/cluster/metrics > # Start the first job on the test queue. > # Start the second job on the default queue (does not work if the order of the two jobs > is swapped). > # While the two applications are running, the "totalMB" at > /ws/v1/cluster/metrics will go down by > the amount of MB used by the first job (screenshots attached). > Alternately: > In > TestNodeLabelContainerAllocation.testQueueMetricsWithLabelsOnDefaultLabelNode(), > add the following line at the end of the test before rm1.close(): > CSQueue rootQueue = cs.getRootQueue(); > assertEquals(10*GB, > rootQueue.getMetrics().getAvailableMB() + > rootQueue.getMetrics().getAllocatedMB()); > There are two nodes of 10GB each and only one of them has a non-default > label. 
The test will also fail against the 20*GB check.
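As a sanity check on the invariant the reproduction above asserts, the following toy model (all names are illustrative, not the actual QueueMetrics API) shows the default-partition-only accounting that YARN-6467 intends: nodes and allocations on a labelled partition should not move these gauges at all, so availableMB + allocatedMB stays equal to the default partition's total memory:

```java
import java.util.Objects;

// Toy model of default-partition-only queue metrics.
public class DefaultPartitionMetrics {
  private static final String DEFAULT_PARTITION = "";
  private long availableMB;
  private long allocatedMB;

  // A node's memory only counts if it is in the default partition.
  public void addNode(String partition, long mb) {
    if (Objects.equals(partition, DEFAULT_PARTITION)) {
      availableMB += mb;
    }
  }

  // Allocations on labelled partitions must not touch the gauges either.
  public void allocate(String partition, long mb) {
    if (Objects.equals(partition, DEFAULT_PARTITION)) {
      availableMB -= mb;
      allocatedMB += mb;
    }
  }

  public long getAvailableMB() { return availableMB; }
  public long getAllocatedMB() { return allocatedMB; }
}
```

With two 10GB nodes, one relabelled to "test", the invariant is availableMB + allocatedMB == 10*GB regardless of where jobs run; the reported bug is that an allocation on the "test" partition incorrectly decrements the default partition's gauges.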
[jira] [Commented] (YARN-9560) Restructure DockerLinuxContainerRuntime to extend a new OCIContainerRuntime
[ https://issues.apache.org/jira/browse/YARN-9560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16868240#comment-16868240 ] Hadoop QA commented on YARN-9560: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 33s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 23s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 23s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 4 new + 21 unchanged - 3 fixed = 25 total (was 24) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 56s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 56s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 77m 2s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=18.09.5 Server=18.09.5 Image:yetus/hadoop:bdbca0e53b4 | | JIRA Issue | YARN-9560 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12972280/YARN-9560.007.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 7b9362145dfb 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 5bfdf62 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_212 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/24290/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/24290/testReport/ | | Max. process+thread count | 307 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U:
[jira] [Commented] (YARN-8995) Log the event type of the too big AsyncDispatcher event queue size, and add the information to the metrics.
[ https://issues.apache.org/jira/browse/YARN-8995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16868197#comment-16868197 ] Tao Yang commented on YARN-8995: Hi, [~zhuqi]. You should update the patch based on trunk, instead of reverting the AsyncDispatcher#LOG updates already in trunk. In other words, the patch should apply cleanly to trunk. There are still two unnecessary empty lines below the lines "eventHandlingThread.start();" and "YarnConfiguration.DEFAULT_DISPATCHER_PRINT_EVENTS_INFO_INTERVAL_IN_THOUSANDS) * 1000;". > Log the event type of the too big AsyncDispatcher event queue size, and add > the information to the metrics. > > > Key: YARN-8995 > URL: https://issues.apache.org/jira/browse/YARN-8995 > Project: Hadoop YARN > Issue Type: Improvement > Components: metrics, nodemanager, resourcemanager >Affects Versions: 3.2.0 >Reporter: zhuqi >Assignee: zhuqi >Priority: Major > Attachments: TestStreamPerf.java, YARN-8995.001.patch, > YARN-8995.002.patch, YARN-8995.003.patch, YARN-8995.004.patch, > YARN-8995.005.patch, YARN-8995.006.patch > > > In our growing cluster, there are unexpected situations that cause some event > queues to block and degrade the performance of the cluster, such as the bug of > https://issues.apache.org/jira/browse/YARN-5262 . I think it's necessary to > log the event type when the event queue size grows too big, and to add the information > to the metrics, with the queue-size threshold as a parameter that can be > changed.
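A minimal sketch of the idea in this issue, assuming a simple counter wrapped around the event queue. The class and enum names are made up for illustration; the real patch would hook into AsyncDispatcher and its logger/metrics rather than System.err:

```java
import java.util.EnumMap;

// Sketch: count pending events per type and report a breakdown once the
// queue size crosses a configurable threshold. Illustrative names only.
public class QueueSizeReporter<T extends Enum<T>> {
  private final EnumMap<T, Long> pending;
  private final int threshold;
  private int size;

  public QueueSizeReporter(Class<T> enumClass, int threshold) {
    this.pending = new EnumMap<>(enumClass);
    this.threshold = threshold;
  }

  public void onEnqueue(T type) {
    pending.merge(type, 1L, Long::sum);
    size++;
    if (size > threshold) {
      // The real patch would log via the dispatcher's logger and also
      // export this breakdown as metrics.
      System.err.println("Event queue size " + size
          + " over threshold; breakdown: " + pending);
    }
  }

  public void onDequeue(T type) {
    pending.merge(type, -1L, Long::sum);
    size--;
  }

  public long pending(T type) { return pending.getOrDefault(type, 0L); }
  public int size() { return size; }

  public enum QueueEvent { NODE_UPDATE, APP_ATTEMPT }  // illustrative
}
```

The per-type breakdown is what makes an oversized queue diagnosable: a raw size tells you the dispatcher is backed up, while the dominant event type points at the producer (as in the YARN-5262 case referenced above).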
[jira] [Commented] (YARN-9560) Restructure DockerLinuxContainerRuntime to extend a new OCIContainerRuntime
[ https://issues.apache.org/jira/browse/YARN-9560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16868189#comment-16868189 ] Eric Badger commented on YARN-9560: --- Attached patch 007, which moves a bunch of the configs back into {{DockerLinuxContainerRuntime}}, since we are choosing to have separate configs for Docker and Runc. My ideal final product would be to have the subclassed runtimes define variables (such as {{allowedRuntimes}}) and then have {{OCIContainerRuntime}} define the implementation of the methods that use them. However, at this time that isn't possible, since there will be enough difference between Docker and Runc. So for the time being, I've put the Docker-related configs in {{DockerLinuxContainerRuntime}} wherever the implementations of the methods that use them cannot, in their current state, be shared between the runtimes. I also added a formatted string type for the shared ENV variables in {{OCIContainerRuntime}}, which will be defined by {{envConfigType}}, which is in turn defined by the subclassed runtimes. I went back and forth on a few different approaches before landing on this implementation, so your feedback is appreciated. cc [~Jim_Brennan], [~eyang], [~ccondit] > Restructure DockerLinuxContainerRuntime to extend a new OCIContainerRuntime > --- > > Key: YARN-9560 > URL: https://issues.apache.org/jira/browse/YARN-9560 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Eric Badger >Assignee: Eric Badger >Priority: Major > Labels: Docker > Attachments: YARN-9560.001.patch, YARN-9560.002.patch, > YARN-9560.003.patch, YARN-9560.004.patch, YARN-9560.005.patch, > YARN-9560.006.patch, YARN-9560.007.patch > > > Since the new OCI/squashFS/runc runtime will be using a lot of the same code > as DockerLinuxContainerRuntime, it would be good to move a bunch of the > DockerLinuxContainerRuntime code up a level to an abstract class that both of > the runtimes can extend. 
> The new structure will look like: > {noformat} > OCIContainerRuntime (abstract class) > - DockerLinuxContainerRuntime > - FSImageContainerRuntime (name negotiable) > {noformat} > This JIRA should only change the structure of the code, not the actual > semantics
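The subclass-supplied-value pattern described in the comment above can be sketched as follows. The class names and the exact ENV-variable format here are assumptions for illustration (patterned after YARN's {{YARN_CONTAINER_RUNTIME_DOCKER_*}} variables), not the actual YARN API:

```java
// Sketch: an abstract OCI-style runtime holds the shared logic, while each
// subclass supplies its runtime-specific value (standing in for the
// envConfigType mentioned in the comment).
public abstract class OciRuntimeSketch {
  // Each subclassed runtime defines its own ENV-variable type string.
  protected abstract String envConfigType();

  // Shared implementation in the abstract class, parameterized by the
  // subclass-provided value.
  public String envVarName(String suffix) {
    return String.format("YARN_CONTAINER_RUNTIME_%s_%s", envConfigType(), suffix);
  }

  public static class DockerRuntimeSketch extends OciRuntimeSketch {
    @Override protected String envConfigType() { return "DOCKER"; }
  }

  public static class RuncRuntimeSketch extends OciRuntimeSketch {
    @Override protected String envConfigType() { return "RUNC"; }
  }
}
```

This mirrors the trade-off Eric describes: logic that genuinely differs between Docker and Runc stays in the subclass, while anything expressible in terms of a subclass-defined value can live once in the abstract class.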
[jira] [Updated] (YARN-9560) Restructure DockerLinuxContainerRuntime to extend a new OCIContainerRuntime
[ https://issues.apache.org/jira/browse/YARN-9560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger updated YARN-9560: -- Attachment: YARN-9560.007.patch > Restructure DockerLinuxContainerRuntime to extend a new OCIContainerRuntime > --- > > Key: YARN-9560 > URL: https://issues.apache.org/jira/browse/YARN-9560 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Eric Badger >Assignee: Eric Badger >Priority: Major > Labels: Docker > Attachments: YARN-9560.001.patch, YARN-9560.002.patch, > YARN-9560.003.patch, YARN-9560.004.patch, YARN-9560.005.patch, > YARN-9560.006.patch, YARN-9560.007.patch > > > Since the new OCI/squashFS/runc runtime will be using a lot of the same code > as DockerLinuxContainerRuntime, it would be good to move a bunch of the > DockerLinuxContainerRuntime code up a level to an abstract class that both of > the runtimes can extend. > The new structure will look like: > {noformat} > OCIContainerRuntime (abstract class) > - DockerLinuxContainerRuntime > - FSImageContainerRuntime (name negotiable) > {noformat} > This JIRA should only change the structure of the code, not the actual > semantics
[jira] [Commented] (YARN-9615) Add dispatcher metrics to RM
[ https://issues.apache.org/jira/browse/YARN-9615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16868109#comment-16868109 ] Hadoop QA commented on YARN-9615: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 54s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 56s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 9s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 101 new + 45 unchanged - 0 fixed = 146 total (was 45) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 52s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 24s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 44s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 80m 45s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}160m 16s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | | Switch statement found in org.apache.hadoop.yarn.server.resourcemanager.RMAsyncDispatcherMetrics.incrementEventType(Event, long) where default case is missing At RMAsyncDispatcherMetrics.java:long) where default case is missing At RMAsyncDispatcherMetrics.java:[lines 81-88] | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce
[jira] [Commented] (YARN-9559) Create AbstractContainersLauncher for pluggable ContainersLauncher logic
[ https://issues.apache.org/jira/browse/YARN-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16868103#comment-16868103 ] Hadoop QA commented on YARN-9559: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 39s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 52s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 59s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 26s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 24s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 0 new + 318 unchanged - 2 fixed = 318 total (was 320) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 22s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 35s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 5s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 13s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 43s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}150m 38s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e | | JIRA Issue | YARN-9559 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12968964/YARN-9559.003.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs
[jira] [Commented] (YARN-9631) hadoop-yarn-applications-catalog-webapp doesn't respect mvn test -D parameter
[ https://issues.apache.org/jira/browse/YARN-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16868067#comment-16868067 ] Hudson commented on YARN-9631: -- FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16793 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/16793/]) YARN-9631. Added ability to select JavaScript test or skip JavaScript (eyang: rev 5bfdf62614735e09b67d6c70a0db4e0dbd2743b2) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/pom.xml > hadoop-yarn-applications-catalog-webapp doesn't respect mvn test -D parameter > - > > Key: YARN-9631 > URL: https://issues.apache.org/jira/browse/YARN-9631 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 3.3.0 > Environment: > >Reporter: Wei-Chiu Chuang >Assignee: Eric Yang >Priority: Major > Attachments: YARN-9631.001.patch, YARN-9631.002.patch, > YARN-9631.003.patch > > > When I run a HDFS test {{mvn test -Dtest=TestMaintenanceState}} from the > Hadoop source code root, hadoop-yarn-applications-catalog-webapp doesn't > respect the test filter and always runs its tests.
[jira] [Commented] (YARN-9631) hadoop-yarn-applications-catalog-webapp doesn't respect mvn test -D parameter
[ https://issues.apache.org/jira/browse/YARN-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16868062#comment-16868062 ] Wei-Chiu Chuang commented on YARN-9631: --- I spoke with [~eyang] offline; the approach is good, +1. There is a problem on my machine where the Jetty server starts but cannot be connected to, which blocks the test. However, Eric verified this is not an issue on other Mac/Linux machines, so it is most likely a local configuration problem (a firewall or something similar). > hadoop-yarn-applications-catalog-webapp doesn't respect mvn test -D parameter > - > > Key: YARN-9631 > URL: https://issues.apache.org/jira/browse/YARN-9631 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 3.3.0 > Environment: > >Reporter: Wei-Chiu Chuang >Assignee: Eric Yang >Priority: Major > Attachments: YARN-9631.001.patch, YARN-9631.002.patch, > YARN-9631.003.patch > > > When I run a HDFS test {{mvn test -Dtest=TestMaintenanceState}} from the > Hadoop source code root, hadoop-yarn-applications-catalog-webapp doesn't > respect the test filter and always runs its tests.
[jira] [Commented] (YARN-9596) QueueMetrics has incorrect metrics when labelled partitions are involved
[ https://issues.apache.org/jira/browse/YARN-9596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16868027#comment-16868027 ] Eric Payne commented on YARN-9596: -- [~samkhan], these changes look good to me for trunk. Do they backport to branch-2? > QueueMetrics has incorrect metrics when labelled partitions are involved > > > Key: YARN-9596 > URL: https://issues.apache.org/jira/browse/YARN-9596 > Project: Hadoop YARN > Issue Type: Bug > Components: capacity scheduler >Affects Versions: 2.8.0, 3.3.0 >Reporter: Muhammad Samir Khan >Assignee: Muhammad Samir Khan >Priority: Major > Attachments: Screen Shot 2019-06-03 at 4.41.45 PM.png, Screen Shot > 2019-06-03 at 4.44.15 PM.png, YARN-9596.001.patch, YARN-9596.002.patch > > > After YARN-6467, QueueMetrics should only be tracking metrics for the default > partition. However, the metrics are incorrect when labelled partitions are > involved. > Steps to reproduce > == > # Configure capacity-scheduler.xml with label configuration > # Add label "test" to the cluster and replace the label on node1 to be "test" > # Note down "totalMB" at > /ws/v1/cluster/metrics > # Start the first job on the test queue. > # Start the second job on the default queue (does not work if the order of the two jobs > is swapped). > # While the two applications are running, the "totalMB" at > /ws/v1/cluster/metrics will go down by > the amount of MB used by the first job (screenshots attached). > Alternately: > In > TestNodeLabelContainerAllocation.testQueueMetricsWithLabelsOnDefaultLabelNode(), > add the following line at the end of the test before rm1.close(): > CSQueue rootQueue = cs.getRootQueue(); > assertEquals(10*GB, > rootQueue.getMetrics().getAvailableMB() + > rootQueue.getMetrics().getAllocatedMB()); > There are two nodes of 10GB each and only one of them has a non-default > label. The test will also fail against the 20*GB check. 
[jira] [Commented] (YARN-9631) hadoop-yarn-applications-catalog-webapp doesn't respect mvn test -D parameter
[ https://issues.apache.org/jira/browse/YARN-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16868026#comment-16868026 ] Eric Yang commented on YARN-9631: - To skip all tests: {code}mvn package -DskipTests{code} The result would be: {code} [INFO] --- maven-surefire-plugin:3.0.0-M1:test (default-test) @ hadoop-yarn-applications-catalog-webapp --- [INFO] Tests are skipped. [INFO] [INFO] --- jasmine-maven-plugin:2.1:test (default) @ hadoop-yarn-applications-catalog-webapp --- [INFO] Skipping Jasmine Specs {code} There is no reason to use the -Dtest and -DskipTests parameters together, because they contradict each other and no tests will be run. > hadoop-yarn-applications-catalog-webapp doesn't respect mvn test -D parameter > - > > Key: YARN-9631 > URL: https://issues.apache.org/jira/browse/YARN-9631 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 3.3.0 > Environment: > >Reporter: Wei-Chiu Chuang >Assignee: Eric Yang >Priority: Major > Attachments: YARN-9631.001.patch, YARN-9631.002.patch, > YARN-9631.003.patch > > > When I run a HDFS test {{mvn test -Dtest=TestMaintenanceState}} from the > Hadoop source code root, hadoop-yarn-applications-catalog-webapp doesn't > respect the test filter and always runs its tests.
[jira] [Commented] (YARN-9631) hadoop-yarn-applications-catalog-webapp doesn't respect mvn test -D parameter
[ https://issues.apache.org/jira/browse/YARN-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16868018#comment-16868018 ] Eric Yang commented on YARN-9631: - [~jojochuang] JavaScript tests are skipped when no test case filename matches the -Dtest property, e.g.: {code} mvn test -Dtest=TestMaintenanceState {code} The resulting output is: {code} --- J A S M I N E S P E C S --- [INFO] Results: 0 specs, 0 failures, 0 pending {code} I don't think the -DskipTests option needs to be specified, unless you wish to skip all tests. > hadoop-yarn-applications-catalog-webapp doesn't respect mvn test -D parameter > - > > Key: YARN-9631 > URL: https://issues.apache.org/jira/browse/YARN-9631 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 3.3.0 > Environment: > >Reporter: Wei-Chiu Chuang >Assignee: Eric Yang >Priority: Major > Attachments: YARN-9631.001.patch, YARN-9631.002.patch, > YARN-9631.003.patch > > > When I run a HDFS test {{mvn test -Dtest=TestMaintenanceState}} from the > Hadoop source code root, hadoop-yarn-applications-catalog-webapp doesn't > respect the test filter and always runs its tests. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9615) Add dispatcher metrics to RM
[ https://issues.apache.org/jira/browse/YARN-9615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16868001#comment-16868001 ] Jonathan Hung commented on YARN-9615: - Thanks [~bibinchundatt], I just attached YARN-9615.poc.patch, it would be great if you could take a look. It adds a couple metrics for RM async dispatcher + scheduler dispatcher events. If your approach is similar, perhaps we can use this patch as a basis. If not , I'd be interested to see what your implementation looks like. > Add dispatcher metrics to RM > > > Key: YARN-9615 > URL: https://issues.apache.org/jira/browse/YARN-9615 > Project: Hadoop YARN > Issue Type: Task >Reporter: Jonathan Hung >Assignee: Jonathan Hung >Priority: Major > Attachments: YARN-9615.poc.patch, screenshot-1.png > > > It'd be good to have counts/processing times for each event type in RM async > dispatcher and scheduler async dispatcher. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
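The general shape of what the POC adds — a count and cumulative processing time per event type, keyed by the event-type enum — can be sketched with plain JDK types. This is an illustration of the pattern only, not the patch's actual classes; all names below are invented:

```java
import java.util.EnumMap;
import java.util.concurrent.atomic.LongAdder;

// Illustrative only: per-event-type counters and processing times,
// as a dispatcher could record them around each handle() call.
public class EventTypeMetricsSketch<T extends Enum<T>> {
    private final EnumMap<T, LongAdder> counts;
    private final EnumMap<T, LongAdder> processingNanos;

    public EventTypeMetricsSketch(Class<T> enumClass) {
        counts = new EnumMap<>(enumClass);
        processingNanos = new EnumMap<>(enumClass);
        // Pre-populate every enum constant so record() never needs a null check.
        for (T type : enumClass.getEnumConstants()) {
            counts.put(type, new LongAdder());
            processingNanos.put(type, new LongAdder());
        }
    }

    // Called by the dispatcher after processing one event of the given type.
    public void record(T type, long elapsedNanos) {
        counts.get(type).increment();
        processingNanos.get(type).add(elapsedNanos);
    }

    public long getCount(T type) { return counts.get(type).sum(); }
    public long getTotalNanos(T type) { return processingNanos.get(type).sum(); }

    enum DemoEventType { APP_ADDED, APP_REMOVED } // stand-in for e.g. RMAppEventType

    public static void main(String[] args) {
        EventTypeMetricsSketch<DemoEventType> m =
            new EventTypeMetricsSketch<>(DemoEventType.class);
        m.record(DemoEventType.APP_ADDED, 1_000);
        m.record(DemoEventType.APP_ADDED, 2_000);
        System.out.println(m.getCount(DemoEventType.APP_ADDED));      // 2
        System.out.println(m.getTotalNanos(DemoEventType.APP_ADDED)); // 3000
    }
}
```

Using EnumMap with counters created up front keeps the per-event overhead to a map lookup plus an increment, which matters on the RM's hot dispatch path.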
[jira] [Updated] (YARN-9615) Add dispatcher metrics to RM
[ https://issues.apache.org/jira/browse/YARN-9615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hung updated YARN-9615: Attachment: YARN-9615.poc.patch > Add dispatcher metrics to RM > > > Key: YARN-9615 > URL: https://issues.apache.org/jira/browse/YARN-9615 > Project: Hadoop YARN > Issue Type: Task >Reporter: Jonathan Hung >Assignee: Jonathan Hung >Priority: Major > Attachments: YARN-9615.poc.patch, screenshot-1.png > > > It'd be good to have counts/processing times for each event type in RM async > dispatcher and scheduler async dispatcher. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9631) hadoop-yarn-applications-catalog-webapp doesn't respect mvn test -D parameter
[ https://issues.apache.org/jira/browse/YARN-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867915#comment-16867915 ] Wei-Chiu Chuang commented on YARN-9631: --- Thank you Eric! It seems like with this patch I would still need to specify the following in order to filter tests and skip the javascript tests. {code:java} mvn test -Dtest=TestMaintenanceState -DskipTests {code} IMHO this is a little confusing because I would have expected -DskipTests would skip all tests. What do you think? > hadoop-yarn-applications-catalog-webapp doesn't respect mvn test -D parameter > - > > Key: YARN-9631 > URL: https://issues.apache.org/jira/browse/YARN-9631 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 3.3.0 > Environment: > >Reporter: Wei-Chiu Chuang >Assignee: Eric Yang >Priority: Major > Attachments: YARN-9631.001.patch, YARN-9631.002.patch, > YARN-9631.003.patch > > > When I run a HDFS test {{mvn test -Dtest=TestMaintenanceState}} from the > Hadoop source code root, hadoop-yarn-applications-catalog-webapp doesn't > respect the test filter and always runs its tests. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9607) Auto-configuring rollover-size of IFile format for non-appendable filesystems
[ https://issues.apache.org/jira/browse/YARN-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867857#comment-16867857 ] Steve Loughran commented on YARN-9607: -- HADOOP-15691 proposes adding a new method, getPathCapabilities, to let you check exactly that. Sadly, it's received approximately zero reviews, and is only something I'm working on in my spare time. I would like to see it in, and given that YARN is shipped in sync with everything else, now is a good time for other people to offer to get involved. If I offer to update that patch, who in this JIRA discussion promises to review it with a goal of getting it committed? Once it is in, the check would be as simple as {code} if (fs.hasPathCapability("fs.feature.append", path)) { ... } else { //fallback } {code} That's it: no need to second-guess things downstream or write code which only works on some stores, as determined by trial and error and support calls. But if nobody puts their hand up to say "I need this and will help get it in", you aren't going to get it. > Auto-configuring rollover-size of IFile format for non-appendable filesystems > - > > Key: YARN-9607 > URL: https://issues.apache.org/jira/browse/YARN-9607 > Project: Hadoop YARN > Issue Type: Sub-task > Components: log-aggregation, yarn >Affects Versions: 3.3.0 >Reporter: Adam Antal >Assignee: Adam Antal >Priority: Major > Attachments: YARN-9607.001.patch, YARN-9607.002.patch > > > In YARN-9525, we made IFile format compatible with remote folders with s3a > scheme. In rolling fashioned log-aggregation IFile still fails with the > "append is not supported" error message, which is a known limitation of the > format by design. > There is a workaround though: setting the rollover size in the configuration > of the IFile format, in each rolling cycle a new aggregated log file will be > created, thus we eliminated the append from the process. 
Setting this config > globally would cause performance problems in the regular log-aggregation, so > I'm suggesting to enforcing this config to zero, if the scheme of the URI is > s3a (or any other non-appendable filesystem). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
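Until such a probe exists, callers end up maintaining per-scheme lists. A minimal sketch of the fallback pattern described above, with a hypothetical PathCapabilities-style interface (the interface, scheme list, and helper names are assumptions for illustration, not the real Hadoop API):

```java
import java.net.URI;
import java.util.Set;

// Hypothetical stand-in for the proposed capability probe from HADOOP-15691.
interface CapabilityProbe {
    boolean hasPathCapability(String capability, URI path);
}

public class AppendCapabilitySketch {
    // The trial-and-error scheme list that a real probe would make unnecessary.
    static final Set<String> NON_APPENDABLE = Set.of("s3a", "s3n", "swift");

    // Fallback used when no capability probe is available: guess from the scheme.
    static boolean supportsAppendByScheme(URI path) {
        return !NON_APPENDABLE.contains(path.getScheme());
    }

    static boolean supportsAppend(CapabilityProbe probe, URI path) {
        if (probe != null) {
            // Preferred: ask the filesystem itself about the capability.
            return probe.hasPathCapability("fs.feature.append", path);
        }
        return supportsAppendByScheme(path);
    }

    public static void main(String[] args) {
        System.out.println(supportsAppend(null, URI.create("s3a://bucket/logs")));  // false
        System.out.println(supportsAppend(null, URI.create("hdfs://nn:8020/logs"))); // true
    }
}
```

The scheme list is exactly the kind of downstream second-guessing the proposed API is meant to remove: it goes stale as stores gain features, while a capability query stays accurate per filesystem instance.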
[jira] [Commented] (YARN-9448) Fix Opportunistic Scheduling for node local allocations.
[ https://issues.apache.org/jira/browse/YARN-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867787#comment-16867787 ] Abhishek Modi commented on YARN-9448: - [~baktha] one of the ways to reproduce it is to ask for two opportunistic containers on a two node cluster: # any (no preference) # node local container on node 1. Without this patch, there is a 50% probability that the 2nd container won't be allocated on node 1. > Fix Opportunistic Scheduling for node local allocations. > > > Key: YARN-9448 > URL: https://issues.apache.org/jira/browse/YARN-9448 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Abhishek Modi >Assignee: Abhishek Modi >Priority: Major > Fix For: 3.3.0 > > Attachments: YARN-9448.001.patch, YARN-9448.002.patch, > YARN-9448.003.patch, YARN-9448.004.patch > > > Right now, an opportunistic container might not get allocated on a rack local node > even if it's available. > Nodes are right now blacklisted if any container except a node local container > is allocated on that node. If a container was previously allocated on > that node, that node wouldn't even be considered, even if there is an ask for a > node local request. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6740) Federation Router (hiding multiple RMs for ApplicationClientProtocol) phase 2
[ https://issues.apache.org/jira/browse/YARN-6740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867786#comment-16867786 ] Abhishek Modi commented on YARN-6740: - [~hunhun] I think this has already been implemented as part of some other jira. I will cross check it and will update this jira. > Federation Router (hiding multiple RMs for ApplicationClientProtocol) phase 2 > - > > Key: YARN-6740 > URL: https://issues.apache.org/jira/browse/YARN-6740 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Abhishek Modi >Priority: Major > > This JIRA tracks the implementation of the layer for routing > ApplicaitonClientProtocol requests to the appropriate RM(s) in a federated > YARN cluster. > Under the YARN-3659 we only implemented getNewApplication, submitApplication, > forceKillApplication and getApplicationReport to execute applications E2E. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9374) HBaseTimelineWriterImpl sync writes has to avoid thread blocking if storage down
[ https://issues.apache.org/jira/browse/YARN-9374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867781#comment-16867781 ] Hadoop QA commented on YARN-9374: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 33s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 51s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 20s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 34s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 51s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 65m 52s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e | | JIRA Issue | YARN-9374 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12972214/YARN-9374-004.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall
[jira] [Commented] (YARN-9607) Auto-configuring rollover-size of IFile format for non-appendable filesystems
[ https://issues.apache.org/jira/browse/YARN-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867775#comment-16867775 ] Hadoop QA commented on YARN-9607: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 25m 48s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 45s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 26s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common: The patch generated 1 new + 24 unchanged - 0 fixed = 25 total (was 24) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 36s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 46s{color} | {color:green} hadoop-yarn-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 87m 18s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=18.09.5 Server=18.09.5 Image:yetus/hadoop:bdbca0e53b4 | | JIRA Issue | YARN-9607 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12972213/YARN-9607.002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 4bf29579d092 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9d68425 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_212 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/24286/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/24286/testReport/ | | Max. process+thread count | 307 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common
[jira] [Commented] (YARN-9374) HBaseTimelineWriterImpl sync writes has to avoid thread blocking if storage down
[ https://issues.apache.org/jira/browse/YARN-9374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867747#comment-16867747 ] Prabhu Joseph commented on YARN-9374: - Thanks [~snemeth] for reviewing. 1. Have changed the variable name from hbi to writer. 2. I was wrongly assuming that an Exception at that point could only mean HbaseStorageMonitor failed to detect HBase being up. Have modified the testcase a little bit and made the assertion message clearer. > HBaseTimelineWriterImpl sync writes has to avoid thread blocking if storage > down > > > Key: YARN-9374 > URL: https://issues.apache.org/jira/browse/YARN-9374 > Project: Hadoop YARN > Issue Type: Sub-task > Components: ATSv2 >Affects Versions: 3.2.0 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Major > Attachments: YARN-9374-001.patch, YARN-9374-002.patch, > YARN-9374-003.patch, YARN-9374-004.patch > > > HBaseTimelineWriterImpl sync writes has to avoid thread blocking if storage > is down. Currently we check if hbase storage is down in TimelineReader before > reading entities and fail immediately in YARN-8302. Similar fix is needed for > write. Async is handled in YARN-9335. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-9374) HBaseTimelineWriterImpl sync writes has to avoid thread blocking if storage down
[ https://issues.apache.org/jira/browse/YARN-9374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prabhu Joseph updated YARN-9374: Attachment: YARN-9374-004.patch > HBaseTimelineWriterImpl sync writes has to avoid thread blocking if storage > down > > > Key: YARN-9374 > URL: https://issues.apache.org/jira/browse/YARN-9374 > Project: Hadoop YARN > Issue Type: Sub-task > Components: ATSv2 >Affects Versions: 3.2.0 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Major > Attachments: YARN-9374-001.patch, YARN-9374-002.patch, > YARN-9374-003.patch, YARN-9374-004.patch > > > HBaseTimelineWriterImpl sync writes has to avoid thread blocking if storage > is down. Currently we check if hbase storage is down in TimelineReader before > reading entities and fail immediately in YARN-8302. Similar fix is needed for > write. Async is handled in YARN-9335. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-9607) Auto-configuring rollover-size of IFile format for non-appendable filesystems
[ https://issues.apache.org/jira/browse/YARN-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Antal updated YARN-9607: - Attachment: YARN-9607.002.patch > Auto-configuring rollover-size of IFile format for non-appendable filesystems > - > > Key: YARN-9607 > URL: https://issues.apache.org/jira/browse/YARN-9607 > Project: Hadoop YARN > Issue Type: Sub-task > Components: log-aggregation, yarn >Affects Versions: 3.3.0 >Reporter: Adam Antal >Assignee: Adam Antal >Priority: Major > Attachments: YARN-9607.001.patch, YARN-9607.002.patch > > > In YARN-9525, we made IFile format compatible with remote folders with s3a > scheme. In rolling fashioned log-aggregation IFile still fails with the > "append is not supported" error message, which is a known limitation of the > format by design. > There is a workaround though: setting the rollover size in the configuration > of the IFile format, in each rolling cycle a new aggregated log file will be > created, thus we eliminated the append from the process. Setting this config > globally would cause performance problems in the regular log-aggregation, so > I'm suggesting to enforcing this config to zero, if the scheme of the URI is > s3a (or any other non-appendable filesystem). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9607) Auto-configuring rollover-size of IFile format for non-appendable filesystems
[ https://issues.apache.org/jira/browse/YARN-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867716#comment-16867716 ] Adam Antal commented on YARN-9607: -- Uploaded patch v2. A lot has changed since the last update, so let me reply to each of you: [~sunilg] I think introducing a global config is not a good idea. The problem is that a global configuration does not depend on the filesystem: we still have to find out whether the filesystem supports append or not, and it makes no sense to disable that configuration, as that would break things anyway. The cleanest and easiest solution is to collect the schemes that we want to support and add a contract-based test (see YARN-9525). [~wangda]: the config is now enforced. I also removed s3n since it's deprecated, and I had missed the fact that wasb and adls support append (according to their filesystem contract xml). [~ste...@apache.org], I don't know whether we have to explicitly define the set of filesystems where append is not supported. I didn't find any elegant way to get this information (whether append is supported) directly from either the FileSystem or its subclasses. Any suggestions on that one? [~snemeth]: Since the tests got removed, your first point is no longer valid. The other ones are incorporated in the patch. The tests got removed because I am adding a comprehensive filesystem-contract-based test in YARN-9525 which will also cover the checking of the rollover size. Jenkins will probably give a -1 on this, but this is a precondition of YARN-9525, so we have to push this in first. Let me know if you have any concerns. 
> Auto-configuring rollover-size of IFile format for non-appendable filesystems > - > > Key: YARN-9607 > URL: https://issues.apache.org/jira/browse/YARN-9607 > Project: Hadoop YARN > Issue Type: Sub-task > Components: log-aggregation, yarn >Affects Versions: 3.3.0 >Reporter: Adam Antal >Assignee: Adam Antal >Priority: Major > Attachments: YARN-9607.001.patch, YARN-9607.002.patch > > > In YARN-9525, we made IFile format compatible with remote folders with s3a > scheme. In rolling fashioned log-aggregation IFile still fails with the > "append is not supported" error message, which is a known limitation of the > format by design. > There is a workaround though: setting the rollover size in the configuration > of the IFile format, in each rolling cycle a new aggregated log file will be > created, thus we eliminated the append from the process. Setting this config > globally would cause performance problems in the regular log-aggregation, so > I'm suggesting to enforcing this config to zero, if the scheme of the URI is > s3a (or any other non-appendable filesystem). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
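The enforcement described in this issue — forcing the IFile rollover size to zero when the remote log directory's filesystem cannot append, so every rolling cycle writes a fresh file — can be sketched as follows. This is an illustrative stand-in, not the patch's code; the scheme set and method names are assumptions:

```java
import java.net.URI;
import java.util.Set;

// Sketch: pick the effective IFile rollover size based on whether the
// remote log directory's filesystem supports append. Zero means "never
// append; start a new aggregated log file every rolling cycle".
public class RolloverSizeSketch {
    // Schemes assumed non-appendable for this illustration.
    static final Set<String> NON_APPENDABLE_SCHEMES = Set.of("s3a");

    static long effectiveRolloverSize(URI remoteLogDir, long configuredSize) {
        if (NON_APPENDABLE_SCHEMES.contains(remoteLogDir.getScheme())) {
            return 0L; // enforced: roll over on every cycle instead of appending
        }
        return configuredSize; // appendable stores keep the configured size
    }

    public static void main(String[] args) {
        URI s3 = URI.create("s3a://bucket/app-logs");
        URI hdfs = URI.create("hdfs://nn:8020/app-logs");
        System.out.println(effectiveRolloverSize(s3, 10_485_760L));   // 0
        System.out.println(effectiveRolloverSize(hdfs, 10_485_760L)); // 10485760
    }
}
```

Scoping the override to the filesystem scheme is what avoids the performance problem noted above: appendable stores such as HDFS keep the configured rollover size and regular log-aggregation behaviour is unchanged.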
[jira] [Commented] (YARN-9574) ArtifactId of MaWo application is wrong
[ https://issues.apache.org/jira/browse/YARN-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867668#comment-16867668 ] Wanqiang Ji commented on YARN-9574: --- Thanks [~eyang] > ArtifactId of MaWo application is wrong > --- > > Key: YARN-9574 > URL: https://issues.apache.org/jira/browse/YARN-9574 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wanqiang Ji >Assignee: Wanqiang Ji >Priority: Major > Fix For: 3.3.0 > > Attachments: YARN-9574.001.patch, YARN-9574.002.patch > > > We should rename "hadoop-applications-mawo" and > "hadoop-applications-mawo-core" to > "hadoop-yarn-applications-mawo" and "hadoop-yarn-applications-mawo-core". -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9374) HBaseTimelineWriterImpl sync writes has to avoid thread blocking if storage down
[ https://issues.apache.org/jira/browse/YARN-9374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867631#comment-16867631 ] Szilard Nemeth commented on YARN-9374: -- Hi [~Prabhu Joseph]! Thanks for this patch! Just a minor comment: {code:java} HBaseTimelineWriterImpl hbi = new HBaseTimelineWriterImpl(); {code} Could you call hbi as writer instead? One more thing: I can understand what happens here: {code:java} util.shutdownMiniHBaseCluster(); GenericTestUtils.waitFor(() -> hbi.isHBaseDown(), 1000, 10); boolean exceptionCaught = false; try{ hbi.write(new TimelineCollectorContext("ATS1", "user1", "flow2", "AB7822C10F", 1002345678919L, appId), te, UserGroupInformation.createRemoteUser("user1")); } catch (Exception e) { exceptionCaught = true; } assertTrue("HBaseStorageMonitor failed to detect HBase Down", exceptionCaught); {code} So here, you are expecting an exception because you call write on the writer and HBase is down. Right after this code block, you have: {code:java} util.startMiniHBaseCluster(1, 1); GenericTestUtils.waitFor(() -> !hbi.isHBaseDown(), 1000, 10); try { hbi.write(new TimelineCollectorContext("ATS", "user1", "flow3", "AB7822C10F", 1002345678919L, appId), te, UserGroupInformation.createRemoteUser("user1")); } catch (Exception e) { Assert.fail("HbaseStorageMonitor failed to detect HBase Up"); } {code} I don't really get this. You are simulating HBase is up again, then trying to write a timeline entry. But if an exception is thrown, I don't think you can be sure that is because the HbaseStorageMonitor failed to detect HBase is up again, so I would rethink the assertion message. What was your intention here? Thanks! 
> HBaseTimelineWriterImpl sync writes has to avoid thread blocking if storage > down > > > Key: YARN-9374 > URL: https://issues.apache.org/jira/browse/YARN-9374 > Project: Hadoop YARN > Issue Type: Sub-task > Components: ATSv2 >Affects Versions: 3.2.0 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Major > Attachments: YARN-9374-001.patch, YARN-9374-002.patch, > YARN-9374-003.patch > > > HBaseTimelineWriterImpl sync writes has to avoid thread blocking if storage > is down. Currently we check if hbase storage is down in TimelineReader before > reading entities and fail immediately in YARN-8302. Similar fix is needed for > write. Async is handled in YARN-9335. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
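One way to make the assertion in the review comment above trustworthy is to catch a narrow exception type rather than a bare Exception, so the test can distinguish "storage detected as down" from unrelated failures. A plain-JDK sketch of the pattern — the exception class and write method here are hypothetical stand-ins, not the actual monitor's API:

```java
// Sketch: asserting on a specific exception type so a test can tell the
// storage-monitor failure apart from any other problem in the write path.
public class NarrowExceptionAssertSketch {
    // Hypothetical stand-in for the exception the storage monitor would throw.
    static class StorageDownException extends RuntimeException {
        StorageDownException(String msg) { super(msg); }
    }

    // Stand-in for the writer: fails with the narrow type only when storage is down.
    static void write(boolean storageDown) {
        if (storageDown) {
            throw new StorageDownException("HBase is down");
        }
    }

    public static void main(String[] args) {
        boolean caughtExpected = false;
        try {
            write(true);
        } catch (StorageDownException e) {
            // Only this type proves the monitor fired; any other exception
            // would propagate and fail the test with its real cause.
            caughtExpected = true;
        }
        System.out.println(caughtExpected); // true
    }
}
```

With this shape, the second half of the reviewed test would not need a misleading assertion message at all: an unexpected exception simply propagates with its own stack trace.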
[jira] [Commented] (YARN-6740) Federation Router (hiding multiple RMs for ApplicationClientProtocol) phase 2
[ https://issues.apache.org/jira/browse/YARN-6740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867608#comment-16867608 ] hunshenshi commented on YARN-6740: -- Is this still under development? > Federation Router (hiding multiple RMs for ApplicationClientProtocol) phase 2 > - > > Key: YARN-6740 > URL: https://issues.apache.org/jira/browse/YARN-6740 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Abhishek Modi >Priority: Major > > This JIRA tracks the implementation of the layer for routing > ApplicationClientProtocol requests to the appropriate RM(s) in a federated > YARN cluster. > Under YARN-3659 we only implemented getNewApplication, submitApplication, > forceKillApplication and getApplicationReport to execute applications E2E. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6695) Race condition in RM for publishing container events vs appFinished events causes NPE
[ https://issues.apache.org/jira/browse/YARN-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867599#comment-16867599 ] K G Bakthavachalam commented on YARN-6695:
--
[~Prabhu Joseph] can you provide brief info on how to reproduce this issue?
> Race condition in RM for publishing container events vs appFinished events > causes NPE > -- > > Key: YARN-6695 > URL: https://issues.apache.org/jira/browse/YARN-6695 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Rohith Sharma K S >Assignee: Prabhu Joseph >Priority: Critical > Fix For: 3.3.0, 3.2.1, 3.1.3 > > Attachments: YARN-6695-002.patch, YARN-6695.001.patch > >
> When RM publishes container events, i.e. by enabling *yarn.rm.system-metrics-publisher.emit-container-events*, there is a race condition between processing those events and the appFinished event that removes the appId from the collector list, which causes an NPE.
> Look at the trace below, where the appId is removed from the collectors first and the corresponding events are processed afterwards.
> {noformat}
> 2017-06-06 19:28:48,896 INFO capacity.ParentQueue (ParentQueue.java:removeApplication(472)) - Application removed - appId: application_1496758895643_0005 user: root leaf-queue of parent: root #applications: 0
> 2017-06-06 19:28:48,921 INFO collector.TimelineCollectorManager (TimelineCollectorManager.java:remove(190)) - The collector service for application_1496758895643_0005 was removed
> 2017-06-06 19:28:48,922 ERROR metrics.TimelineServiceV2Publisher (TimelineServiceV2Publisher.java:putEntity(451)) - Error when publishing entity TimelineEntity[type='YARN_CONTAINER', id='container_e01_1496758895643_0005_01_02']
> java.lang.NullPointerException
>   at org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher.putEntity(TimelineServiceV2Publisher.java:448)
>   at org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher.access$100(TimelineServiceV2Publisher.java:72)
>   at org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher$TimelineV2EventHandler.handle(TimelineServiceV2Publisher.java:480)
>   at org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher$TimelineV2EventHandler.handle(TimelineServiceV2Publisher.java:469)
>   at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:201)
>   at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:127)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
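The ordering in the log above can be reduced to a small sketch (class and method names are hypothetical, not the actual RM classes): the appFinished handler removes the app's collector while container events for the same app are still queued, so the publisher has to treat a missing collector as "app already finished" instead of dereferencing null.

```java
import java.util.concurrent.ConcurrentHashMap;

public class CollectorRace {
  // appId -> collector; stands in for the TimelineCollectorManager registry.
  private final ConcurrentHashMap<String, Object> collectors = new ConcurrentHashMap<>();

  void appStarted(String appId) {
    collectors.put(appId, new Object());
  }

  void appFinished(String appId) {
    collectors.remove(appId);
  }

  // Returns false (event dropped) instead of throwing an NPE when the
  // collector has already been removed by appFinished.
  boolean publishContainerEvent(String appId) {
    Object collector = collectors.get(appId);
    if (collector == null) {
      return false; // app already finished: the ordering seen in the log above
    }
    // a real publisher would call putEntity(collector, ...) here
    return true;
  }

  public static void main(String[] args) {
    CollectorRace rm = new CollectorRace();
    rm.appStarted("application_1496758895643_0005");
    rm.appFinished("application_1496758895643_0005");
    // A container event processed after removal is dropped rather than NPE-ing:
    System.out.println(rm.publishContainerEvent("application_1496758895643_0005")); // prints false
  }
}
```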
[jira] [Commented] (YARN-9314) Fair Scheduler: Queue Info mistake when configured same queue name at same level
[ https://issues.apache.org/jira/browse/YARN-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867568#comment-16867568 ] Hadoop QA commented on YARN-9314:
-
| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 8m 12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 5m 29s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 3m 22s{color} | {color:red} branch has errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 27s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 2 new + 13 unchanged - 0 fixed = 15 total (was 13) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 3m 10s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 34s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 40s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m 36s{color} | {color:black} {color} |
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption |
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | YARN-9314 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12972195/YARN-9314_2.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 8d65ae480be6 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 48e564f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| mvninstall | https://builds.apache.org/job/PreCommit-YARN-Build/24285/artifact/out/branch-mvninstall-root.txt |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/24285/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| unit |
[jira] [Commented] (YARN-9448) Fix Opportunistic Scheduling for node local allocations.
[ https://issues.apache.org/jira/browse/YARN-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867516#comment-16867516 ] K G Bakthavachalam commented on YARN-9448:
--
[~abmodi] can you give brief information on how to reproduce the issue?
> Fix Opportunistic Scheduling for node local allocations. > > > Key: YARN-9448 > URL: https://issues.apache.org/jira/browse/YARN-9448 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Abhishek Modi >Assignee: Abhishek Modi >Priority: Major > Fix For: 3.3.0 > > Attachments: YARN-9448.001.patch, YARN-9448.002.patch, > YARN-9448.003.patch, YARN-9448.004.patch > >
> Right now, an opportunistic container might not get allocated on a rack-local node even if one is available.
> Nodes are currently blacklisted if any container other than a node-local container is allocated on them. If a container was previously allocated on a node, that node won't even be considered, even when there is an ask for a node-local request on it.
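A minimal sketch of the blacklisting behavior described above (method and variable names are hypothetical, not the actual opportunistic allocator): a node that already received a container is skipped for further opportunistic asks, but a node-local ask should still be allowed to match it.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class OpportunisticAllocSketch {

  // Returns the node chosen for an opportunistic ask. requestedNode is
  // non-null for a node-local ask; blacklist holds nodes that already
  // received a container.
  static String allocate(String requestedNode, List<String> nodes, Set<String> blacklist) {
    if (requestedNode != null && nodes.contains(requestedNode)) {
      // Node-local ask: match the requested node even if it is blacklisted.
      return requestedNode;
    }
    for (String node : nodes) {
      if (!blacklist.contains(node)) {
        return node;
      }
    }
    return null; // nothing available
  }

  public static void main(String[] args) {
    List<String> nodes = Arrays.asList("node1", "node2");
    Set<String> blacklist = new HashSet<>(Arrays.asList("node1"));
    // Without the node-local check, the ask for node1 would be rejected
    // because node1 is blacklisted; with it, the node is still matched.
    System.out.println(allocate("node1", nodes, blacklist)); // prints node1
    System.out.println(allocate(null, nodes, blacklist));    // prints node2
  }
}
```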
[jira] [Commented] (YARN-9314) Fair Scheduler: Queue Info mistake when configured same queue name at same level
[ https://issues.apache.org/jira/browse/YARN-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867481#comment-16867481 ] Shen Yinjie commented on YARN-9314:
---
[~fengyongshe] handed this issue over to me offline. I have uploaded a new patch as you suggested; please review, [~wilfreds].
> Fair Scheduler: Queue Info mistake when configured same queue name at same > level > > > Key: YARN-9314 > URL: https://issues.apache.org/jira/browse/YARN-9314 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: fengyongshe >Assignee: Shen Yinjie >Priority: Major > Attachments: Fair Scheduler Mistake when configured same queue at > same level.png, YARN-9314_2.patch, YARN-9341.patch > > > The Queue Info is configured in fair-scheduler.xml like below > > {color:#ff}{color} > 3072mb,3vcores > 4096mb,4vcores > > 1024mb,1vcores > 2048mb,2vcores > Charlie > > > {color:#ff}{color} > 1024mb,1vcores > 2048mb,2vcores > > > {color:#33}The Queue root.deva configured last will override existing > root.deva{color}{color:#33} in root.deva.sample, like the > {color}attachment > > root.deva > ||Used Resources:|| > ||Min Resources:|. => should be <3072mb,3vcores>| > ||Max Resources:|. => should be <4096mb,4vcores>| > ||Reserved Resources:|| > ||Steady Fair Share:|| > ||Instantaneous Fair Share:|| > > root.deva.sample > ||Min Resources:|| > ||Max Resources:|| > ||Reserved Resources:|| > ||Steady Fair Share:|| > >
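The XML element tags in the quoted description were stripped during extraction, leaving only the values. A plausible reconstruction of the scenario it describes, using the standard Fair Scheduler allocation-file element names and the quoted resource values (the placement of "Charlie" under aclSubmitApps is an assumption), would be:

```xml
<?xml version="1.0"?>
<allocations>
  <queue name="deva">
    <minResources>3072 mb,3 vcores</minResources>
    <maxResources>4096 mb,4 vcores</maxResources>
    <queue name="sample">
      <minResources>1024 mb,1 vcores</minResources>
      <maxResources>2048 mb,2 vcores</maxResources>
      <aclSubmitApps>Charlie</aclSubmitApps>
    </queue>
  </queue>
  <!-- A second queue with the same name at the same level: per the report,
       its resource settings silently override those of root.deva above. -->
  <queue name="deva">
    <minResources>1024 mb,1 vcores</minResources>
    <maxResources>2048 mb,2 vcores</maxResources>
  </queue>
</allocations>
```

This matches the reported symptom: root.deva shows Min/Max of 1024mb,1vcores/2048mb,2vcores in the UI where 3072mb,3vcores/4096mb,4vcores would be expected.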
[jira] [Updated] (YARN-9314) Fair Scheduler: Queue Info mistake when configured same queue name at same level
[ https://issues.apache.org/jira/browse/YARN-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated YARN-9314: -- Attachment: YARN-9314_2.patch > Fair Scheduler: Queue Info mistake when configured same queue name at same > level > > > Key: YARN-9314 > URL: https://issues.apache.org/jira/browse/YARN-9314 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: fengyongshe >Assignee: Shen Yinjie >Priority: Major > Attachments: Fair Scheduler Mistake when configured same queue at > same level.png, YARN-9314_2.patch, YARN-9341.patch > > > The Queue Info is configured in fair-scheduler.xml like below > > {color:#ff}{color} > 3072mb,3vcores > 4096mb,4vcores > > 1024mb,1vcores > 2048mb,2vcores > Charlie > > > {color:#ff}{color} > 1024mb,1vcores > 2048mb,2vcores > > > {color:#33}The Queue root.deva configured last will override existing > root.deva{color}{color:#33} in root.deva.sample, like the > {color}attachment > > root.deva > ||Used Resources:|| > ||Min Resources:|. => should be <3072mb,3vcores>| > ||Max Resources:|. => should be <4096mb,4vcores>| > ||Reserved Resources:|| > ||Steady Fair Share:|| > ||Instantaneous Fair Share:|| > > root.deva.sample > ||Min Resources:|| > ||Max Resources:|| > ||Reserved Resources:|| > ||Steady Fair Share:|| > >