[jira] [Created] (YARN-10856) Prevent ATS v2 health check REST API call if the ATS service itself is disabled.

2021-07-15 Thread Siddharth Ahuja (Jira)
Siddharth Ahuja created YARN-10856:
--

 Summary: Prevent ATS v2 health check REST API call if the ATS service 
itself is disabled.
 Key: YARN-10856
 URL: https://issues.apache.org/jira/browse/YARN-10856
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: yarn-ui-v2
Affects Versions: 3.3.1
Reporter: Siddharth Ahuja


Currently, even if {{yarn.timeline-service.enabled}} is set to false, the UI2 code 
still goes ahead and performs timeline health check REST API calls; see [1], 
[2] and [3].

This is unnecessary and can slow down RM UI2 page loading when a firewall 
silently drops packets on the ATS v2 port (e.g. 8188/8190, which is not meant 
to be reachable in such a setup) and the call has to wait for its timeout in 
the background.

This ATSv2 health check REST API call is redundant and should be skipped when 
the service itself is disabled (see the sketch after the links below for the 
intent of the guard).

[1] 
https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/timeline-health.js
[2] 
https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/application.js#L34
[3] 
https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app/logs.js#L40
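
For illustration only, a minimal sketch of the guard being requested. The real 
change belongs in the Ember UI code linked at [1], [2] and [3]; the Java snippet 
below (class name and printouts are purely illustrative) only mirrors the 
configuration check that the UI-side guard should express, using the existing 
{{YarnConfiguration}} constants for {{yarn.timeline-service.enabled}}:

{code:java}
// Sketch only: the actual fix would live in the UI2 JavaScript, not in Java.
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class TimelineHealthCheckGuard {
  public static void main(String[] args) {
    YarnConfiguration conf = new YarnConfiguration();
    boolean timelineEnabled = conf.getBoolean(
        YarnConfiguration.TIMELINE_SERVICE_ENABLED,
        YarnConfiguration.DEFAULT_TIMELINE_SERVICE_ENABLED);
    if (!timelineEnabled) {
      // Never touch the ATS v2 port; report the service as disabled so the
      // page can render immediately instead of waiting for a network timeout.
      System.out.println("Timeline service is disabled; skipping health check");
      return;
    }
    // Only in this branch would the timeline health check REST call be made.
    System.out.println("Timeline service is enabled; health check would run");
  }
}
{code}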






[jira] [Created] (YARN-10855) yarn logs cli fails to retrieve logs if any TFile is corrupt or empty

2021-07-15 Thread Jim Brennan (Jira)
Jim Brennan created YARN-10855:
--

 Summary: yarn logs cli fails to retrieve logs if any TFile is 
corrupt or empty
 Key: YARN-10855
 URL: https://issues.apache.org/jira/browse/YARN-10855
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Affects Versions: 3.3.1, 2.10.1, 3.2.2, 3.4.0
Reporter: Jim Brennan


When attempting to retrieve YARN logs via the CLI, the command failed with the 
following stack trace (on branch-2.10):
{noformat}
yarn logs -applicationId application_1591017890475_1049740 > logs
20/06/05 19:15:50 INFO client.RMProxy: Connecting to ResourceManager 
20/06/05 19:15:51 INFO client.AHSProxy: Connecting to Application History server 
Exception in thread "main" java.io.EOFException: Cannot seek to negative offset
    at org.apache.hadoop.hdfs.DFSInputStream.seek(DFSInputStream.java:1701)
    at org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:65)
    at org.apache.hadoop.io.file.tfile.BCFile$Reader.<init>(BCFile.java:624)
    at org.apache.hadoop.io.file.tfile.TFile$Reader.<init>(TFile.java:804)
    at org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogReader.<init>(AggregatedLogFormat.java:503)
    at org.apache.hadoop.yarn.logaggregation.LogCLIHelpers.dumpAllContainersLogs(LogCLIHelpers.java:227)
    at org.apache.hadoop.yarn.client.cli.LogsCLI.run(LogsCLI.java:333)
    at org.apache.hadoop.yarn.client.cli.LogsCLI.main(LogsCLI.java:367)
{noformat}
The problem was a zero-length TFile for one of the containers in the 
application's aggregated log directory in HDFS. When we removed the zero-length 
file, {{yarn logs}} was able to retrieve the logs.

A corrupt or zero length TFile for one container should not prevent loading 
logs for the rest of the application.
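
As a rough sketch of that skip-and-continue behaviour (not the actual patch; the 
method shape, class name and logging below are illustrative only), the 
per-node-file loop in the CLI helper could tolerate a bad file along these lines:

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat;

class SkipCorruptTFileSketch {
  // Walk the aggregated log files of an application, skipping any file that
  // is empty or cannot be opened, instead of failing the whole run.
  static void dumpAllContainersLogs(Configuration conf, FileStatus[] nodeFiles) {
    for (FileStatus nodeFile : nodeFiles) {
      if (nodeFile.getLen() == 0) {
        System.err.println("Skipping empty aggregated log file "
            + nodeFile.getPath());
        continue;
      }
      try {
        AggregatedLogFormat.LogReader reader =
            new AggregatedLogFormat.LogReader(conf, nodeFile.getPath());
        try {
          // ... read and print the container logs held in this TFile ...
        } finally {
          reader.close();
        }
      } catch (IOException e) {
        // Corrupt TFile: warn and continue with the remaining node files.
        System.err.println("Skipping unreadable aggregated log file "
            + nodeFile.getPath() + ": " + e);
      }
    }
  }
}
{code}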






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2021-07-15 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/569/

[Jul 14, 2021 7:15:02 AM] (noreply) HADOOP-17672.Remove an invalid comment 
content in the FileContext class. (#2961)
[Jul 14, 2021 11:58:32 AM] (noreply) HADOOP-17795. Provide fallbacks for 
callqueue.impl and scheduler.impl (#3192)




-1 overall


The following subsystems voted -1:
blanks pathlen spotbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

spotbugs :

   module:hadoop-tools/hadoop-azure 
   Inconsistent synchronization of 
org.apache.hadoop.fs.azure.NativeAzureFileSystem$NativeAzureFsInputStream.in; 
locked 81% of time Unsynchronized access at NativeAzureFileSystem.java:81% of 
time Unsynchronized access at NativeAzureFileSystem.java:[line 938] 

spotbugs :

   module:hadoop-tools 
   Inconsistent synchronization of 
org.apache.hadoop.fs.azure.NativeAzureFileSystem$NativeAzureFsInputStream.in; 
locked 81% of time Unsynchronized access at NativeAzureFileSystem.java:81% of 
time Unsynchronized access at NativeAzureFileSystem.java:[line 938] 

spotbugs :

   module:root 
   Inconsistent synchronization of 
org.apache.hadoop.fs.azure.NativeAzureFileSystem$NativeAzureFsInputStream.in; 
locked 81% of time Unsynchronized access at NativeAzureFileSystem.java:81% of 
time Unsynchronized access at NativeAzureFileSystem.java:[line 938] 

Failed junit tests :

   hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped 
   hadoop.yarn.csi.client.TestCsiClient 
   hadoop.tools.dynamometer.TestDynamometerInfra 
   hadoop.tools.dynamometer.TestDynamometerInfra 
  

   cc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/569/artifact/out/results-compile-cc-root.txt
 [96K]

   javac:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/569/artifact/out/results-compile-javac-root.txt
 [364K]

   blanks:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/569/artifact/out/blanks-eol.txt
 [13M]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/569/artifact/out/blanks-tabs.txt
 [2.0M]

   checkstyle:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/569/artifact/out/results-checkstyle-root.txt
 [16M]

   pathlen:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/569/artifact/out/results-pathlen.txt
 [16K]

   pylint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/569/artifact/out/results-pylint.txt
 [20K]

   shellcheck:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/569/artifact/out/results-shellcheck.txt
 [28K]

   xml:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/569/artifact/out/xml.txt
 [24K]

   javadoc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/569/artifact/out/results-javadoc-javadoc-root.txt
 [408K]

   spotbugs:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/569/artifact/out/branch-spotbugs-hadoop-tools_hadoop-azure-warnings.html
 [8.0K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/569/artifact/out/branch-spotbugs-hadoop-tools-warnings.html
 [12K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/569/artifact/out/branch-spotbugs-root-warnings.html
 [20K]

   unit:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/569/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 [512K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/569/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-csi.txt
 [20K]
  

Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64

2021-07-15 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/200/

[Jul 13, 2021 2:00:13 PM] (noreply) MAPREDUCE-7356. Remove some duplicate 
dependencies from mapreduce-client's child poms (#3193). Contributed by Viraj 
Jasani.
[Jul 13, 2021 9:18:59 PM] (noreply) HDFS-15785. Datanode to support using DNS 
to resolve nameservices to IP addresses to get list of namenodes. (#2639)
[Jul 14, 2021 1:11:50 AM] (Konstantin Shvachko) HADOOP-17028. ViewFS should 
initialize mounted target filesystems lazily. Contributed by Abhishek Das 
(#2260)
[Jul 14, 2021 7:15:02 AM] (noreply) HADOOP-17672.Remove an invalid comment 
content in the FileContext class. (#2961)
[Jul 14, 2021 11:58:32 AM] (noreply) HADOOP-17795. Provide fallbacks for 
callqueue.impl and scheduler.impl (#3192)




-1 overall


The following subsystems voted -1:
blanks mvnsite pathlen spotbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Redundant nullcheck of oldLock, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:[line 695] 
   Redundant nullcheck of metaChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long,
 FileInputStream, FileChannel, String) Redundant null check at 
MappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long,
 FileInputStream, FileChannel, String) Redundant null check at 
MappableBlockLoader.java:[line 138] 
   Redundant nullcheck of blockChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at MemoryMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at MemoryMappableBlockLoader.java:[line 75] 
   Redundant nullcheck of blockChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at NativePmemMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at NativePmemMappableBlockLoader.java:[line 85] 
   Redundant nullcheck of metaChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion,
 long, FileInputStream, FileChannel, String) Redundant null check at 
NativePmemMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion,
 long, FileInputStream, FileChannel, String) Redundant null check at 
NativePmemMappableBlockLoader.java:[line 130] 
   
org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager$UserCounts
 doesn't override java.util.ArrayList.equals(Object) At 
RollingWindowManager.java:At RollingWindowManager.java:[line 1] 

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Redundant 

Re: [DISCUSS] Tips for improving productivity, workflow in the Hadoop project?

2021-07-15 Thread Brahma Reddy Battula
I agree with Ahmed Hussein… Jira should not be used just for number generation.

We can always revisit the Jira to see the useful discussion in one place…

@Wei-Chiu, +1 on the proposal for cleaning up the PRs.


On Thu, 15 Jul 2021 at 9:15 PM, epa...@apache.org  wrote:

>  > I usually use PR comments to discuss about the patch submitted.
> My concern is that still leaves multiple places to look in order to get a
> full picture of an issue.
> -Eric
>
> On Wednesday, July 14, 2021, 7:07:30 PM CDT, Masatake Iwasaki <
> iwasak...@oss.nttdata.co.jp> wrote:
>
>  > - recently, JIRA became some sort of a "number generator" with
> insufficient
> > description/details as the
> >developers and the reviewers spending more time discussing in the PR.
>
> JIRA issues contain useful information in the fields.
> We are leveraging them in development and release process.
>
> * https://yetus.apache.org/documentation/0.13.0/releasedocmaker/
> *
> https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12336122
>
> I usually use PR comments to discuss about the patch submitted.
> JIRA comments are used for background or design discussion before and
> after submitting PR.
> There would be no problem having no comment in minor/trivial JIRA issues.
>
>
> On 2021/07/14 23:50, Ahmed Hussein wrote:
> > Do you consider migrating Jira issues to Github issues?
> >
> > I am a little bit concerned that there are some committers who still
> prefer
> > Jira-precommits over GitHub PR
> > (P.S. I am not a committer).
> >
> > Their point is that Github-PR confuses them with discussions/comments
> being
> > in two places rather than one.
> >
> > Personally, I found several Github-PRs comments discussing the validity
> of
> > the feature/bug.
> > As a result:
> > - recently, JIRA became some sort of a "number generator" with
> insufficient
> > description/details as the
> >developers and the reviewers spending more time discussing in the PR.
> > - the relation between a single Jira and Github-PR is 1-to-M. In order to
> > find related discussions, the user may
> >need to visit every PR (that may include closed ones)
> >
> >
> >
> > On Wed, Jul 14, 2021 at 8:46 AM Steve Loughran
> 
> > wrote:
> >
> >> not sure about stale PR closing; when you've a patch which is still
> pending
> >> review it's not that fun to have it closed.
> >>
> >> maybe better to have review sessions. I recall many, many years ago
> >> attempts to try and catch up with all outstanding patch reviews.
> >>
> >>
> >>
> >>
> >> On Wed, 14 Jul 2021 at 03:00, Akira Ajisaka 
> wrote:
> >>
> >>> Thank you Wei-Chiu for starting the discussion,
> >>>
>  3. JIRA security
> >>> I'm +1 to use private JIRA issues to handle vulnerabilities.
> >>>
>  5. Doc update
> >>> +1, I build the document daily and it helps me fixing documents:
> >>> https://aajisaka.github.io/hadoop-document/ It's great if the latest
> >>> document is built and published by the Apache Hadoop community.
> >>>
> >>> My idea related to GitHub PR:
> >>> 1. Disable the precommit jobs for JIRA, always use GitHub PR. It saves
> >>> costs to configure and debug the precommit jobs.
> >>> https://issues.apache.org/jira/browse/HADOOP-17798
> >>> 2. Improve the pull request template for the contributors
> >>> https://issues.apache.org/jira/browse/HADOOP-17799
> >>>
> >>> Regards,
> >>> Akira
> >>>
> >>> On Tue, Jul 13, 2021 at 12:35 PM Wei-Chiu Chuang 
> >>> wrote:
> 
>  I work on multiple projects and learned a bunch from those
> >> projects.There
>  are nice add-ons that help with productivity. There are things we can
> >> do
> >>> to
>  help us manage the project better.
> 
>  1. Add new issue types.
>  We can add "Epic" jira type to organize a set of related jiras. This
> >>> could
>  be easier to manage than using a regular JIRA and call it "umbrella".
> 
>  2. GitHub Actions
>  I am seeing more projects moving to GitHub Actions for precommits. We
> >>> don't
>  necessarily need to migrate off Jenkins, but there are nice add-ons
> >> that
>  can perform static analysis, catching potential issues. For example,
> >>> Ozone
>  adds SonarQube to post-commit, and exports the report to SonarCloud.
> >>> Other
>  add-ons are available to scan for docker images, vulnerabilities
> scans.
> 
>  3. JIRA security
>  It is possible to set up security level (public/private) in JIRA. This
> >>> can
>  be used to track vulnerability issues and be made only visible to
>  committers. Example: INFRA-15258
>  
> 
>  4. New JIRA fields
>  It's possible to add new fields. For example, we can add a "Reviewer"
>  field, which could help improve the attention to issues.
> 
>  5. Doc update
>  It is possible to set up automation such that the doc on the Hadoop
> >>> website
>  is refreshed for every commit, providing the latest doc to the public.
> 
>  6. 

Re: [DISCUSS] Tips for improving productivity, workflow in the Hadoop project?

2021-07-15 Thread epa...@apache.org
 > I usually use PR comments to discuss about the patch submitted.
My concern is that still leaves multiple places to look in order to get a full 
picture of an issue.
-Eric

On Wednesday, July 14, 2021, 7:07:30 PM CDT, Masatake Iwasaki 
 wrote: 

 > - recently, JIRA became some sort of a "number generator" with insufficient
> description/details as the
>    developers and the reviewers spending more time discussing in the PR.

JIRA issues contain useful information in the fields.
We are leveraging them in development and release process.

* https://yetus.apache.org/documentation/0.13.0/releasedocmaker/
* https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12336122

I usually use PR comments to discuss the submitted patch.
JIRA comments are used for background or design discussion before and after 
submitting a PR.
There would be no problem having no comment in minor/trivial JIRA issues.


On 2021/07/14 23:50, Ahmed Hussein wrote:
> Do you consider migrating Jira issues to Github issues?
> 
> I am a little bit concerned that there are some committers who still prefer
> Jira-precommits over GitHub PR
> (P.S. I am not a committer).
> 
> Their point is that Github-PR confuses them with discussions/comments being
> in two places rather than one.
> 
> Personally, I found several Github-PRs comments discussing the validity of
> the feature/bug.
> As a result:
> - recently, JIRA became some sort of a "number generator" with insufficient
> description/details as the
>    developers and the reviewers spending more time discussing in the PR.
> - the relation between a single Jira and Github-PR is 1-to-M. In order to
> find related discussions, the user may
>    need to visit every PR (that may include closed ones)
> 
> 
> 
> On Wed, Jul 14, 2021 at 8:46 AM Steve Loughran 
> wrote:
> 
>> not sure about stale PR closing; when you've a patch which is still pending
>> review it's not that fun to have it closed.
>>
>> maybe better to have review sessions. I recall many, many years ago
>> attempts to try and catch up with all outstanding patch reviews.
>>
>>
>>
>>
>> On Wed, 14 Jul 2021 at 03:00, Akira Ajisaka  wrote:
>>
>>> Thank you Wei-Chiu for starting the discussion,
>>>
 3. JIRA security
>>> I'm +1 to use private JIRA issues to handle vulnerabilities.
>>>
 5. Doc update
>>> +1, I build the document daily and it helps me fixing documents:
>>> https://aajisaka.github.io/hadoop-document/ It's great if the latest
>>> document is built and published by the Apache Hadoop community.
>>>
>>> My idea related to GitHub PR:
>>> 1. Disable the precommit jobs for JIRA, always use GitHub PR. It saves
>>> costs to configure and debug the precommit jobs.
>>> https://issues.apache.org/jira/browse/HADOOP-17798
>>> 2. Improve the pull request template for the contributors
>>> https://issues.apache.org/jira/browse/HADOOP-17799
>>>
>>> Regards,
>>> Akira
>>>
>>> On Tue, Jul 13, 2021 at 12:35 PM Wei-Chiu Chuang 
>>> wrote:

 I work on multiple projects and learned a bunch from those
>> projects.There
 are nice add-ons that help with productivity. There are things we can
>> do
>>> to
 help us manage the project better.

 1. Add new issue types.
 We can add "Epic" jira type to organize a set of related jiras. This
>>> could
 be easier to manage than using a regular JIRA and call it "umbrella".

 2. GitHub Actions
 I am seeing more projects moving to GitHub Actions for precommits. We
>>> don't
 necessarily need to migrate off Jenkins, but there are nice add-ons
>> that
 can perform static analysis, catching potential issues. For example,
>>> Ozone
 adds SonarQube to post-commit, and exports the report to SonarCloud.
>>> Other
 add-ons are available to scan for docker images, vulnerabilities scans.

 3. JIRA security
 It is possible to set up security level (public/private) in JIRA. This
>>> can
 be used to track vulnerability issues and be made only visible to
 committers. Example: INFRA-15258
 

 4. New JIRA fields
 It's possible to add new fields. For example, we can add a "Reviewer"
 field, which could help improve the attention to issues.

 5. Doc update
 It is possible to set up automation such that the doc on the Hadoop
>>> website
 is refreshed for every commit, providing the latest doc to the public.

 6. Webhook
 It's possible to set up webhook such that every commit in GitHub sends
>> a
 notification to the ASF slack. It can be used for other kinds of
 automation. Sky's the limit.

 Thoughts? What else can do we?
>>>
>>> -
>>> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>>> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>>>
>>>
>>
> 
> 


Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2021-07-15 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/360/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.TestTrash 
   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.fs.TestFileUtil 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   
hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/360/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/360/artifact/out/diff-compile-javac-root.txt
  [496K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/360/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/360/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/360/artifact/out/patch-mvnsite-root.txt
  [584K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/360/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/360/artifact/out/diff-patch-pylint.txt
  [48K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/360/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/360/artifact/out/diff-patch-shelldocs.txt
  [48K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/360/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/360/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/360/artifact/out/patch-javadoc-root.txt
  [32K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/360/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [244K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/360/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [424K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/360/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/360/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [40K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/360/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [124K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/360/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
  [96K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/360/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [112K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/360/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/360/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt
  [28K]
   

[jira] [Created] (YARN-10854) Support marking inactive node as untracked without configured include path

2021-07-15 Thread Tao Yang (Jira)
Tao Yang created YARN-10854:
---

 Summary: Support marking inactive node as untracked without 
configured include path
 Key: YARN-10854
 URL: https://issues.apache.org/jira/browse/YARN-10854
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Reporter: Tao Yang
Assignee: Tao Yang


Currently, inactive nodes which have been decommissioned/shutdown/lost for a 
while (the expiration time specified via 
{{yarn.resourcemanager.node-removal-untracked.timeout-ms}}, 60 seconds by 
default) and which exist in neither the include nor the exclude file can be 
marked as untracked nodes and removed from RM state. This is very useful when 
auto-scaling is enabled in an elastic cloud environment, since it avoids an 
unbounded growth of inactive nodes (mostly decommissioned nodes).

But this only works when the include path is configured, which does not match 
most of our cloud environments: they deliberately run without a configured 
whitelist of nodes, so that auto-scaling can add and remove nodes easily 
without further security requirements.

So I propose to support marking inactive nodes as untracked even when no 
include path is configured. To stay compatible with former versions, we can 
add a switch config for this (a rough sketch follows below).
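
For illustration only, a rough sketch of what such a switch could look like. The 
property name below is hypothetical and the real change would sit in the RM's 
node-list handling next to the existing untracked-node check, not in a 
standalone helper:

{code:java}
import org.apache.hadoop.conf.Configuration;

class UntrackedNodeSwitchSketch {
  // Hypothetical name for the proposed switch (illustrative only).
  static final String UNTRACKED_WITHOUT_INCLUDE_PATH =
      "yarn.resourcemanager.node-removal-untracked.enable-without-include-path";

  // Decide whether an inactive node that has already passed the untracked
  // timeout may be removed from RM state.
  static boolean canBeUntracked(Configuration conf, String includePath,
      boolean inIncludeList, boolean inExcludeList) {
    boolean includePathConfigured =
        includePath != null && !includePath.isEmpty();
    if (!includePathConfigured) {
      // Current behaviour: a node is never treated as untracked when no
      // include path is configured. Proposed: allow it behind the switch.
      return conf.getBoolean(UNTRACKED_WITHOUT_INCLUDE_PATH, false)
          && !inExcludeList;
    }
    // Existing behaviour: untracked only if the host is in neither list.
    return !inIncludeList && !inExcludeList;
  }
}
{code}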

Any thoughts/suggestions/feedback are welcome!


