Re: [DISCUSS] Hadoop 2019 Release Planning

2019-08-12 Thread Jonathan Hung
Hi Wangda, thanks for starting the discussion. We would also like to
release 2.10.0, which was discussed previously and at various contributor
meetups. I'm interested in being release manager for that.

Thanks,

Jonathan Hung


On Fri, Aug 9, 2019 at 7:59 PM Wangda Tan  wrote:

> Hi all,
>
> Hope this email finds you well.
>
> I want to hear your thoughts about what should be the release plan for
> 2019.
>
> In 2018, we released:
> - 1 maintenance release of 2.6
> - 3 maintenance releases of 2.7
> - 3 maintenance releases of 2.8
> - 3 releases of 2.9
> - 4 releases of 3.0
> - 2 releases of 3.1
>
> Total 16 releases in 2018.
>
> In 2019, so far we have had only two releases:
> - 1 maintenance release of 3.1
> - 1 minor release of 3.2
>
> However, the community has put a lot of effort into stabilizing features on
> the various release branches.
> There are:
> - 217 fixed patches in 3.1.3 [1]
> - 388 fixed patches in 3.2.1 [2]
> - 1172 fixed patches in 3.3.0 [3] (OMG!)
>
> I think it is time to do maintenance releases of 3.1/3.2 and a minor
> release of 3.3.0.
>
> In addition, I saw community discussion about doing a 2.8.6 release for
> security fixes.
>
> Any other releases? I think there are release plans for Ozone as well.
> Please add your thoughts.
>
> Volunteers are welcome! If you are interested in running a release as
> Release Manager (or co-Release Manager), please respond to this email
> thread so we can coordinate.
>
> Thanks,
> Wangda Tan
>
> [1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND resolution = Fixed AND
> fixVersion = 3.1.3
> [2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND resolution = Fixed AND
> fixVersion = 3.2.1
> [3] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND resolution = Fixed AND
> fixVersion = 3.3.0
>


[jira] [Created] (HADOOP-16512) [hadoop-tools] Fix order of actual and expected expression in assert statements

2019-08-12 Thread Adam Antal (JIRA)
Adam Antal created HADOOP-16512:
---

 Summary: [hadoop-tools] Fix order of actual and expected 
expression in assert statements
 Key: HADOOP-16512
 URL: https://issues.apache.org/jira/browse/HADOOP-16512
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 3.2.0
Reporter: Adam Antal


Fix the order of the actual and expected expressions in assert statements,
which produces misleading messages when a test case fails. The attached file
lists some of the places where the order is wrong.
{code:java}
[ERROR] 
testNodeRemovalGracefully(org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService)
  Time elapsed: 3.385 s  <<< FAILURE!
java.lang.AssertionError: Shutdown nodes should be 0 now expected:<1> but 
was:<0>
{code}
In the long term, [AssertJ|http://joel-costigliola.github.io/assertj/] can be
used for new test cases, which avoids such mistakes.

This is a follow-up Jira for the hadoop-tools project.
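
To illustrate the pattern (the value being read below is a hypothetical
stand-in; only the argument order is the point), a minimal JUnit 4 sketch:
{code:java}
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class AssertArgumentOrderExample {
  // Hypothetical stand-in for whatever value the real test reads.
  private final int numShutdownNodes = 0;

  @Test
  public void testShutdownNodeCount() {
    // Wrong order: actual and expected are swapped. If the value were 1,
    // the failure would read "expected:<1> but was:<0>", which is misleading.
    assertEquals("Shutdown nodes should be 0 now", numShutdownNodes, 0);

    // Correct order: assertEquals(message, expected, actual); the expected
    // literal comes first, so a failure reports the observed value correctly.
    assertEquals("Shutdown nodes should be 0 now", 0, numShutdownNodes);
  }
}
{code}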






[jira] [Created] (HADOOP-16511) [hadoop-hdfs] Fix order of actual and expected expression in assert statements

2019-08-12 Thread Adam Antal (JIRA)
Adam Antal created HADOOP-16511:
---

 Summary: [hadoop-hdfs] Fix order of actual and expected expression 
in assert statements
 Key: HADOOP-16511
 URL: https://issues.apache.org/jira/browse/HADOOP-16511
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 3.2.0
Reporter: Adam Antal


Fix the order of the actual and expected expressions in assert statements,
which produces misleading messages when a test case fails. The attached file
lists some of the places where the order is wrong.
{code:java}
[ERROR] 
testNodeRemovalGracefully(org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService)
  Time elapsed: 3.385 s  <<< FAILURE!
java.lang.AssertionError: Shutdown nodes should be 0 now expected:<1> but 
was:<0>
{code}
In the long term, [AssertJ|http://joel-costigliola.github.io/assertj/] can be
used for new test cases, which avoids such mistakes.

This is a follow-up Jira for the hadoop-hdfs project.
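
For comparison, a minimal AssertJ sketch of the same check (the value being
asserted is again a hypothetical stand-in):
{code:java}
import static org.assertj.core.api.Assertions.assertThat;

import org.junit.Test;

public class AssertJStyleExample {
  // Hypothetical stand-in for whatever value the real test reads.
  private final int numShutdownNodes = 0;

  @Test
  public void testShutdownNodeCount() {
    // The actual value always goes into assertThat(), so the arguments
    // cannot be swapped and the failure message is always accurate.
    assertThat(numShutdownNodes)
        .as("Shutdown nodes should be 0 now")
        .isEqualTo(0);
  }
}
{code}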






[jira] [Created] (HADOOP-16510) [hadoop-common] Fix order of actual and expected expression in assert statements

2019-08-12 Thread Adam Antal (JIRA)
Adam Antal created HADOOP-16510:
---

 Summary: [hadoop-common] Fix order of actual and expected 
expression in assert statements
 Key: HADOOP-16510
 URL: https://issues.apache.org/jira/browse/HADOOP-16510
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 3.2.0
Reporter: Adam Antal


Fix the order of the actual and expected expressions in assert statements,
which produces misleading messages when a test case fails. The attached file
lists some of the places where the order is wrong.
{code:java}
[ERROR] 
testNodeRemovalGracefully(org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService)
  Time elapsed: 3.385 s  <<< FAILURE!
java.lang.AssertionError: Shutdown nodes should be 0 now expected:<1> but 
was:<0>
{code}
In the long term, [AssertJ|http://joel-costigliola.github.io/assertj/] can be
used for new test cases, which avoids such mistakes.

This is a follow-up Jira for the hadoop-common project.






Revert of HDFS-12914 breaks the branch-2

2019-08-12 Thread Wei-Chiu Chuang
Heads up,

My bad. This is becoming a mess.

For context: HDFS-12914 in branch-2 has a test failure, so I reverted it. But
HDFS-13898 uses a helper method (BlockManager#setBlockManagerForTesting())
added by HDFS-12914, so the revert breaks the build.

Here's what I propose:

(1) File a new Jira to add back the missing helper method (a rough sketch of
such a test-only setter is below). I don't want to revert HDFS-13898, because
ultimately we want to cherry-pick HDFS-12914 into branch-2, and we still need
that missing helper method.

(2) Resolve HDFS-12914, since this is already a mess. The 3.x branches are
good, and I want 2.x to stay good as well.

(3) Later, I'll file a new Jira to backport HDFS-12914 to branch-2.
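
For reference, such a helper is usually just a test-only setter on the owning
class; a rough sketch (the owning class and field names are assumptions, the
real signature comes from HDFS-12914):

    // Sketch only: the owning class and field are assumed here.
    @VisibleForTesting   // Guava annotation, as used elsewhere in HDFS
    public void setBlockManagerForTesting(BlockManager bm) {
      // Lets tests inject a mock/spy BlockManager; production code never calls this.
      this.blockManager = bm;
    }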

This should help bring a little sanity back to these branches quickly. Give me
a shout if you have any questions.

Weichiu


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-08-12 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1226/

[Aug 11, 2019 6:17:07 AM] (sunilg) YARN-9729. [UI2] Fix error message for logs 
when ATSv2 is offline.
[Aug 11, 2019 10:11:56 AM] (abmodi) YARN-9657. AbstractLivelinessMonitor add 
serviceName to PingChecker




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 
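
   For reference, the shape FindBugs is asking for in WorkerId.equals() (the
   hostname field below is only an illustration; the null and type checks are
   the point):

       @Override
       public boolean equals(Object obj) {
         if (this == obj) {
           return true;
         }
         // instanceof is false for null, so this also covers the missing null check.
         if (!(obj instanceof WorkerId)) {
           return false;
         }
         WorkerId other = (WorkerId) obj;
         return this.getHostname().equals(other.getHostname());
       }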

Failed junit tests :

   hadoop.util.TestBasicDiskValidator 
   hadoop.util.TestDiskChecker 
   hadoop.hdfs.server.datanode.TestLargeBlockReport 
   hadoop.hdfs.TestDecommission 
   hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup 
   hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken 
   hadoop.tools.dynamometer.TestDynamometerInfra 
   hadoop.tools.dynamometer.TestDynamometerInfra 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1226/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1226/artifact/out/diff-compile-javac-root.txt
  [332K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1226/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1226/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1226/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1226/artifact/out/diff-patch-pylint.txt
  [220K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1226/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1226/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1226/artifact/out/whitespace-eol.txt
  [9.6M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1226/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1226/artifact/out/xml.txt
  [16K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1226/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1226/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1226/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1226/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1226/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1226/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [4.0K]
   

[jira] [Created] (HADOOP-16507) S3Guard fsck: Add option to configure severity (level) for the scan

2019-08-12 Thread Gabor Bota (JIRA)
Gabor Bota created HADOOP-16507:
---

 Summary: S3Guard fsck: Add option to configure severity (level) 
for the scan
 Key: HADOOP-16507
 URL: https://issues.apache.org/jira/browse/HADOOP-16507
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Gabor Bota


The severity of a Violation (inconsistency) is defined in 
{{org.apache.hadoop.fs.s3a.s3guard.S3GuardFsck.Violation}}. 

Currently this only describes the severity of a Violation; it is not used to 
filter the scan by issue severity.

The task: use the severity level to decide which issues should be logged 
and/or fixed during the scan. 
Note: the best way to avoid code duplication would be to not even add the 
consistency violation pair to the list of violations during the scan.
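
A hedged sketch of that insertion-time filtering (names such as 
{{getSeverity()}}, {{minSeverityLevel}} and the pair type are assumptions for 
illustration, not the current S3GuardFsck API):
{code:java}
// Sketch only: add a violation pair to the result list only when its
// severity reaches the configured threshold, so lower-severity findings
// never enter the list and need no second filtering pass when logging
// or fixing them.
private void addViolation(ViolationPair pair) {
  if (pair.getViolation().getSeverity() >= minSeverityLevel) {
    violations.add(pair);
  }
}
{code}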






[jira] [Created] (HADOOP-16506) Create proper documentation for MetricLinkedBlockingQueue

2019-08-12 Thread Jinglun (JIRA)
Jinglun created HADOOP-16506:


 Summary: Create proper documentation for MetricLinkedBlockingQueue
 Key: HADOOP-16506
 URL: https://issues.apache.org/jira/browse/HADOOP-16506
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jinglun
Assignee: Jinglun


Add documentation for the MetricLinkedBlockingQueue. 






Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-08-12 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/411/

No changes




-1 overall


The following subsystems voted -1:
asflicense compile findbugs hadolint mvninstall mvnsite pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.api.impl.TestNMClient 
   hadoop.yarn.client.cli.TestRMAdminCLI 
   hadoop.mapred.TestLocalContainerLauncher 
  

   mvninstall:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/411/artifact/out/patch-mvninstall-root.txt
  [408K]

   compile:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/411/artifact/out/patch-compile-root-jdk1.7.0_95.txt
  [256K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/411/artifact/out/patch-compile-root-jdk1.7.0_95.txt
  [256K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/411/artifact/out/patch-compile-root-jdk1.7.0_95.txt
  [256K]

   compile:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/411/artifact/out/patch-compile-root-jdk1.8.0_212.txt
  [240K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/411/artifact/out/patch-compile-root-jdk1.8.0_212.txt
  [240K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/411/artifact/out/patch-compile-root-jdk1.8.0_212.txt
  [240K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/411/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/411/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/411/artifact/out/patch-mvnsite-root.txt
  [328K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/411/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/411/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/411/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/411/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/411/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/411/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/411/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/411/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/411/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/411/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/411/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   

[jira] [Created] (HADOOP-16505) Use custom signature algorithm for `fs.s3a.signing-algorithm`

2019-08-12 Thread Saurav Verma (JIRA)
Saurav Verma created HADOOP-16505:
-

 Summary: Use custom signature algorithm for 
`fs.s3a.signing-algorithm`
 Key: HADOOP-16505
 URL: https://issues.apache.org/jira/browse/HADOOP-16505
 Project: Hadoop Common
  Issue Type: Improvement
  Components: hadoop-aws
Reporter: Saurav Verma


This would enable users to register and use a custom signature algorithm for 
the AWS S3A filesystem, passing it as the value of {{fs.s3a.signing-algorithm}} 
in the configuration.
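
A hedged sketch of what this could look like for a user, assuming the AWS SDK 
v1 signer registration path used by hadoop-aws today; {{MySigner}} and the 
registration call site are illustrative, not part of S3A:
{code:java}
import com.amazonaws.SignableRequest;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.Signer;
import com.amazonaws.auth.SignerFactory;

// Illustrative custom signer implementing the AWS SDK v1 Signer interface.
public class MySigner implements Signer {

  // Register this signer under a name that can then be used as the value of
  // fs.s3a.signing-algorithm in the Hadoop configuration, e.g.:
  //   <property>
  //     <name>fs.s3a.signing-algorithm</name>
  //     <value>MySigner</value>
  //   </property>
  // Registration would have to happen before the S3A client is created.
  public static void register() {
    SignerFactory.registerSigner("MySigner", MySigner.class);
  }

  @Override
  public void sign(SignableRequest<?> request, AWSCredentials credentials) {
    // Custom request-signing logic would go here.
  }
}
{code}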


