[jira] [Created] (YARN-10161) TestRouterWebServicesREST is corrupting STDOUT
Jim Brennan created YARN-10161:
----------------------------------
             Summary: TestRouterWebServicesREST is corrupting STDOUT
                 Key: YARN-10161
                 URL: https://issues.apache.org/jira/browse/YARN-10161
             Project: Hadoop YARN
          Issue Type: Test
          Components: yarn
    Affects Versions: 2.10.0
            Reporter: Jim Brennan

TestRouterWebServicesREST is creating processes that inherit stdin/stdout from the current process, so the output from those jobs goes into the standard output of mvn test. Here's an example from a recent build:

{noformat}
[WARNING] Corrupted STDOUT by directly writing to native stream in forked JVM 1. See FAQ web page and the dump file /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/target/surefire-reports/2020-02-24T08-00-54_776-jvmRun1.dumpstream
[INFO] Tests run: 41, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.644 s - in org.apache.hadoop.yarn.server.router.webapp.TestRouterWebServicesREST
[WARNING] ForkStarter IOException:
506 INFO  [main] resourcemanager.ResourceManager (LogAdapter.java:info(49)) - STARTUP_MSG:
522 INFO  [main] resourcemanager.ResourceManager (LogAdapter.java:info(49)) - registered UNIX signal handlers for [TERM, HUP, INT]
876 INFO  [main] conf.Configuration (Configuration.java:getConfResourceAsInputStream(2588)) - core-site.xml not found
879 INFO  [main] security.Groups (Groups.java:refresh(402)) - clearing userToGroupsMap cache
930 INFO  [main] conf.Configuration (Configuration.java:getConfResourceAsInputStream(2588)) - resource-types.xml not found
930 INFO  [main] resource.ResourceUtils (ResourceUtils.java:addResourcesFileToConf(421)) - Unable to find 'resource-types.xml'.
940 INFO  [main] resource.ResourceUtils (ResourceUtils.java:addMandatoryResources(126)) - Adding resource type - name = memory-mb, units = Mi, type = COUNTABLE
940 INFO  [main] resource.ResourceUtils (ResourceUtils.java:addMandatoryResources(135)) - Adding resource type - name = vcores, units = , type = COUNTABLE
974 INFO  [main] conf.Configuration (Configuration.java:getConfResourceAsInputStream(2591)) - found resource yarn-site.xml at file:/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/target/test-classes/yarn-site.xml
001 INFO  [main] event.AsyncDispatcher (AsyncDispatcher.java:register(227)) - Registering class org.apache.hadoop.yarn.server.resourcemanager.RMFatalEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMFatalEventDispatcher
053 INFO  [main] security.NMTokenSecretManagerInRM (NMTokenSecretManagerInRM.java:<init>(75)) - NMTokenKeyRollingInterval: 8640ms and NMTokenKeyActivationDelay: 90ms
060 INFO  [main] security.RMContainerTokenSecretManager (RMContainerTokenSecretManager.java:<init>(79)) - ContainerTokenKeyRollingInterval: 8640ms and ContainerTokenKeyActivationDelay: 90ms
...
{noformat}

It seems these processes should be redirecting stdout/stderr to a file instead of dumping it to the console.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
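The redirection suggested above can be sketched with the standard `ProcessBuilder` redirect API. This is not the test's actual code; the helper name, log file names, and the `true` command below are illustrative:

```java
import java.io.File;
import java.io.IOException;

// Sketch: instead of letting the child inherit the parent's streams (which is
// what pollutes the surefire fork's STDOUT), point the child's stdout/stderr
// at files under a log directory.
public class RedirectChildOutput {

    static ProcessBuilder buildRedirected(File logDir, String... command) {
        ProcessBuilder pb = new ProcessBuilder(command);
        // Redirect.to(file) truncates and writes the child's output to a file
        // rather than forwarding it to the parent process.
        pb.redirectOutput(ProcessBuilder.Redirect.to(new File(logDir, "child-stdout.log")));
        pb.redirectError(ProcessBuilder.Redirect.to(new File(logDir, "child-stderr.log")));
        return pb;
    }

    public static void main(String[] args) throws IOException {
        ProcessBuilder pb = buildRedirected(new File("/tmp"), "true");
        System.out.println(pb.redirectOutput().type()); // WRITE
    }
}
```

With this setup the forked RM/NM output ends up in per-process log files, and `mvn test` output stays clean.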
[jira] [Created] (YARN-10160) Add auto queue creation related configs to RMWebService#CapacitySchedulerQueueInfo
Prabhu Joseph created YARN-10160:
------------------------------------
             Summary: Add auto queue creation related configs to RMWebService#CapacitySchedulerQueueInfo
                 Key: YARN-10160
                 URL: https://issues.apache.org/jira/browse/YARN-10160
             Project: Hadoop YARN
          Issue Type: Improvement
    Affects Versions: 3.3.0
            Reporter: Prabhu Joseph
            Assignee: Prabhu Joseph

Add auto queue creation related configs to RMWebService#CapacitySchedulerQueueInfo:

{code}
yarn.scheduler.capacity.<queue-path>.auto-create-child-queue.enabled
yarn.scheduler.capacity.<queue-path>.leaf-queue-template.<property>
{code}
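For context, these CapacityScheduler properties are set per parent queue in capacity-scheduler.xml. The queue path `root.parent` and the template value below are illustrative, not from the JIRA:

```xml
<configuration>
  <!-- Illustrative: allow leaf queues to be auto-created under root.parent. -->
  <property>
    <name>yarn.scheduler.capacity.root.parent.auto-create-child-queue.enabled</name>
    <value>true</value>
  </property>
  <!-- Illustrative template property applied to each auto-created leaf queue. -->
  <property>
    <name>yarn.scheduler.capacity.root.parent.leaf-queue-template.capacity</name>
    <value>10</value>
  </property>
</configuration>
```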
Re: [DISCUSS] EOL Hadoop branch-2.8
Looking at the EOL policy wiki:
https://cwiki.apache.org/confluence/display/HADOOP/EOL+%28End-of-life%29+Release+Branches

The Hadoop community can still elect to make security updates for EOL'ed releases. I think the EOL is to give downstream applications (such as HBase) clearer guidance on which Hadoop release lines are still active.

Additionally, I don't think it is sustainable to maintain 6 concurrent release lines in this big project, which is why I wanted to start this discussion.

Thoughts?

On Mon, Feb 24, 2020 at 10:22 AM Sunil Govindan wrote:

> Hi Wei-Chiu
>
> Extremely sorry for the late reply here.
> Could you please help add more clarity on what will happen to branch-2.8
> when we call EOL?
> Does this mean that no more releases will come out of this branch, or are
> there additional guidelines?
>
> - Sunil
>
> On Mon, Feb 24, 2020 at 11:47 PM Wei-Chiu Chuang wrote:
>
> > This thread has been running for 7 days and no -1.
> >
> > Don't think we've established a formal EOL process, but to publicize the
> > EOL, I am going to file a jira, update the wiki and post the announcement
> > to general@ and user@
> >
> > On Wed, Feb 19, 2020 at 1:40 PM Dinesh Chitlangia wrote:
> >
> > > Thanks Wei-Chiu for initiating this.
> > >
> > > +1 for 2.8 EOL.
> > >
> > > On Tue, Feb 18, 2020 at 10:48 PM Akira Ajisaka wrote:
> > >
> > > > Thanks Wei-Chiu for starting the discussion,
> > > >
> > > > +1 for the EoL.
> > > >
> > > > -Akira
> > > >
> > > > On Tue, Feb 18, 2020 at 4:59 PM Ayush Saxena wrote:
> > > >
> > > > > Thanx Wei-Chiu for initiating this
> > > > > +1 for marking 2.8 EOL
> > > > >
> > > > > -Ayush
> > > > >
> > > > > > On 17-Feb-2020, at 11:14 PM, Wei-Chiu Chuang wrote:
> > > > > >
> > > > > > The last Hadoop 2.8.x release, 2.8.5, was GA on September 15th 2018.
> > > > > >
> > > > > > It's been 17 months since the release and the community by and
> > > > > > large have moved up to 2.9/2.10/3.x.
> > > > > >
> > > > > > With Hadoop 3.3.0 over the horizon, is it time to start the EOL
> > > > > > discussion and reduce the number of active branches?
> > > > >
> > > > > ---------------------------------------------------------------------
> > > > > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > > > > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
Re: [DISCUSS] EOL Hadoop branch-2.8
Hi Wei-Chiu

Extremely sorry for the late reply here.
Could you please help add more clarity on what will happen to branch-2.8 when we call EOL?
Does this mean that no more releases will come out of this branch, or are there additional guidelines?

- Sunil

On Mon, Feb 24, 2020 at 11:47 PM Wei-Chiu Chuang wrote:

> This thread has been running for 7 days and no -1.
>
> Don't think we've established a formal EOL process, but to publicize the
> EOL, I am going to file a jira, update the wiki and post the announcement
> to general@ and user@
Re: [DISCUSS] EOL Hadoop branch-2.8
This thread has been running for 7 days and no -1.

I don't think we've established a formal EOL process, but to publicize the EOL, I am going to file a jira, update the wiki and post the announcement to general@ and user@.

On Wed, Feb 19, 2020 at 1:40 PM Dinesh Chitlangia wrote:

> Thanks Wei-Chiu for initiating this.
>
> +1 for 2.8 EOL.
[jira] [Created] (YARN-10159) TimelineConnector does not destroy the jersey client
Prabhu Joseph created YARN-10159:
------------------------------------
             Summary: TimelineConnector does not destroy the jersey client
                 Key: YARN-10159
                 URL: https://issues.apache.org/jira/browse/YARN-10159
             Project: Hadoop YARN
          Issue Type: Sub-task
          Components: ATSv2
    Affects Versions: 3.3.0
            Reporter: Prabhu Joseph
            Assignee: Prabhu Joseph

TimelineConnector does not destroy the jersey client. Per the Jersey javadoc, destroy() "must be called when there are not responses pending, otherwise undefined behavior will occur":

http://javadox.com/com.sun.jersey/jersey-client/1.8/com/sun/jersey/api/client/Client.html#destroy()
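The fix pattern can be sketched as tying the client's lifetime to the connector's shutdown path. The classes below are hypothetical stand-ins (the real code would call `com.sun.jersey.api.client.Client#destroy()` from the service's stop method), kept dependency-free for illustration:

```java
// Hypothetical stand-in for com.sun.jersey.api.client.Client; the real class
// exposes destroy() to release pooled connections and worker threads.
class JerseyClientStub {
    boolean destroyed = false;
    void destroy() { destroyed = true; }
}

// Sketch of the fix: destroy the client exactly once when the connector
// shuts down, i.e. after no responses are pending.
class TimelineConnectorSketch implements AutoCloseable {
    final JerseyClientStub client = new JerseyClientStub();

    @Override
    public void close() {
        if (!client.destroyed) {
            client.destroy();
        }
    }
}
```

Used in a try-with-resources block, this guarantees the client is destroyed even when a request throws.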
[jira] [Created] (YARN-10158) FS-CS converter: convert property yarn.scheduler.fair.update-interval-ms
Peter Bacsko created YARN-10158:
-----------------------------------
             Summary: FS-CS converter: convert property yarn.scheduler.fair.update-interval-ms
                 Key: YARN-10158
                 URL: https://issues.apache.org/jira/browse/YARN-10158
             Project: Hadoop YARN
          Issue Type: Sub-task
            Reporter: Peter Bacsko
            Assignee: Peter Bacsko
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/

[Feb 23, 2020 8:55:39 AM] (ayushsaxena) HDFS-15041. Make MAX_LOCK_HOLD_MS and full queue size configurable.
[Feb 23, 2020 6:37:18 PM] (ayushsaxena) HDFS-15176. Enable GcTimePercentage Metric in NameNode's JvmMetrics.

-1 overall

The following subsystems voted -1:
    asflicense findbugs pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML :

       Parsing Error(s):
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    FindBugs :

       module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
       Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
       Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
       org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]

       module:hadoop-cloud-storage-project/hadoop-cos
       Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:[line 66]
       org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
       Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
       Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
       org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged

    Failed junit tests :

       hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
       hadoop.hdfs.TestDFSStorageStateRecovery
       hadoop.hdfs.TestErasureCodingPolicyWithSnapshot
       hadoop.hdfs.TestReconstructStripedFile
       hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor
       hadoop.hdfs.TestErasureCodingExerciseAPIs
       hadoop.hdfs.server.namenode.TestQuotaByStorageType
       hadoop.hdfs.TestReadStripedFileWithDNFailure
       hadoop.hdfs.TestFileChecksum
       hadoop.hdfs.TestGetBlocks
       hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy
       hadoop.hdfs.TestDFSStripedOutputStream
       hadoop.hdfs.TestErasureCodeBenchmarkThroughput
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
       hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant
       hadoop.hdfs.TestSetrepDecreasing
       hadoop.hdfs.TestDFSStripedInputStream
       hadoop.hdfs.TestLeaseRecovery
[jira] [Created] (YARN-10157) FS-CS converter: initPropertyActions() is not called without rules file
Peter Bacsko created YARN-10157:
-----------------------------------
             Summary: FS-CS converter: initPropertyActions() is not called without rules file
                 Key: YARN-10157
                 URL: https://issues.apache.org/jira/browse/YARN-10157
             Project: Hadoop YARN
          Issue Type: Sub-task
            Reporter: Peter Bacsko
            Assignee: Peter Bacsko

The method {{FSConfigToCSConfigRuleHandler.initPropertyActions()}} should be invoked even if we don't use a rules file. Otherwise the rule handler will not initialize the actions to WARNING.
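A minimal sketch of the intended control flow (the class and field names below are illustrative, not the actual converter code): the action initialization must run on both the rules-file and no-rules-file paths.

```java
// Illustrative sketch: initPropertyActions() must run whether or not a
// conversion rules file was supplied, so property actions default to WARNING.
class RuleHandlerSketch {
    String defaultAction;      // null until initPropertyActions() runs
    boolean rulesLoaded = false;

    void initPropertyActions() { defaultAction = "WARNING"; }

    void loadRules(String rulesFilePath) { rulesLoaded = true; }

    void init(String rulesFilePath) {
        if (rulesFilePath != null) {
            loadRules(rulesFilePath);
        }
        // The bug pattern is calling this only inside the branch above;
        // calling it unconditionally fixes the no-rules-file case.
        initPropertyActions();
    }
}
```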
Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/

No changes

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML :

       Parsing Error(s):
       hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
       hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

    FindBugs :

       module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
       Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 335]

    Failed junit tests :

       hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
       hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
       hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem
       hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem
       hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
       hadoop.registry.secure.TestSecureLogins
       hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2

   cc:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt  [4.0K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt  [328K]

   cc:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/diff-compile-cc-root-jdk1.8.0_242.txt  [4.0K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/diff-compile-javac-root-jdk1.8.0_242.txt  [308K]

   checkstyle:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/diff-checkstyle-root.txt  [16M]

   hadolint:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/diff-patch-hadolint.txt  [4.0K]

   pathlen:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/pathlen.txt  [12K]

   pylint:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/diff-patch-pylint.txt  [24K]

   shellcheck:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/diff-patch-shellcheck.txt  [56K]

   shelldocs:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/diff-patch-shelldocs.txt  [8.0K]

   whitespace:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/whitespace-eol.txt  [12M]
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/whitespace-tabs.txt  [1.3M]

   xml:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/xml.txt  [12K]

   findbugs:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html  [8.0K]

   javadoc:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt  [16K]
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_242.txt  [1.1M]

   unit:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt  [232K]
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt  [36K]
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt  [12K]