Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-07-23 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/213/

[Jul 23, 2020 11:11:35 AM] (Bibin Chundatt) YARN-10315. Avoid sending 
RMNodeResourceupdate event if resource is same. Contributed by Sushil Ks.




-1 overall


The following subsystems voted -1:
docker


Powered by Apache Yetus: https://yetus.apache.org


Re: [VOTE] Release Apache Hadoop 3.1.4 (RC4)

2020-07-23 Thread Szilard Nemeth
+1 (binding).

**TEST STEPS**
1. Build from sources (see Maven / Java and OS details below)
2. Distribute Hadoop to all nodes
3. Start HDFS services + YARN services on nodes
4. Run MapReduce pi job (QuasiMonteCarlo); see the example command below
5. Verified that the application was successful through the YARN RM Web UI
6. Verified the version of the Hadoop release from the YARN RM Web UI
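
Example command for step 4 (the examples-jar path and version are illustrative and depend on the install layout):
$ yarn jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.4.jar pi 10 100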

**OS version**
$ cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/";
BUG_REPORT_URL="https://bugs.centos.org/";

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

**Maven version**
$ mvn -v
Apache Maven 3.0.5 (Red Hat 3.0.5-17)
Maven home: /usr/share/maven

**Java version**
Java version: 1.8.0_191, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-0.el7_5.x86_64/jre
Default locale: en_US, platform encoding: ANSI_X3.4-1968
OS name: "linux", version: "3.10.0-1062.el7.x86_64", arch: "amd64", family:
"unix"

**Maven command to build from sources**
mvn clean package -Pdist -DskipTests -Dmaven.javadoc.skip=true


**OTHER NOTES**
1. Had to manually install Maven in order to compile Hadoop, based on these
steps:
https://gist.github.com/miroslavtamas/cdca97f2eafdd6c28b844434eaa3b631

2. Had to manually install protoc + other required libraries with the
following commands (in this particular order):
sudo yum install -y protobuf-devel
sudo yum install -y gcc gcc-c++ make
sudo yum install -y openssl-devel
sudo yum install -y libgsasl


Thanks,
Szilard

On Thu, Jul 23, 2020 at 4:05 PM Masatake Iwasaki <
iwasak...@oss.nttdata.co.jp> wrote:

> +1 (binding).
>
> * verified the checksum and signature of the source tarball.
> * built from source tarball with native profile on CentOS 7 and OpenJDK 8.
> * built documentation and skimmed the contents.
> * ran example jobs on a 3-node docker cluster with NN-HA and RM-HA enabled.
> * launched pseudo-distributed cluster with Kerberos and SSL enabled, ran
> basic EZ operation, ran example MR jobs.
> * followed the reproduction step reported in HDFS-15313 to see if the
> fix works.
>
> Thanks,
> Masatake Iwasaki
>
> On 2020/07/21 21:50, Gabor Bota wrote:
> > Hi folks,
> >
> > I have put together a release candidate (RC4) for Hadoop 3.1.4.
> >
> > *
> > The RC includes, in addition to the previous ones:
> > * fix for HDFS-15313. Ensure inodes in active filesystem are not
> > deleted during snapshot delete
> > * fix for YARN-10347. Fix double locking in
> > CapacityScheduler#reinitialize in branch-3.1
> > (https://issues.apache.org/jira/browse/YARN-10347)
> > * the revert of HDFS-14941, as it caused
> > HDFS-15421. IBR leak causes standby NN to be stuck in safe mode.
> > (https://issues.apache.org/jira/browse/HDFS-15421)
> > * HDFS-15323, as requested.
> > (https://issues.apache.org/jira/browse/HDFS-15323)
> > *
> >
> > The RC is available at:
> http://people.apache.org/~gabota/hadoop-3.1.4-RC4/
> > The RC tag in git is here:
> > https://github.com/apache/hadoop/releases/tag/release-3.1.4-RC4
> > The maven artifacts are staged at
> > https://repository.apache.org/content/repositories/orgapachehadoop-1275/
> >
> > You can find my public key at:
> > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > and http://keys.gnupg.net/pks/lookup?op=get&search=0xB86249D83539B38C
> >
> > Please try the release and vote. The vote will run for 8 weekdays,
> > until July 31, 2020, 23:00 CET.
> >
> >
> > Thanks,
> > Gabor
> >


Re: [ANNOUNCE] New Apache Hadoop PMC member : Ayush Saxena

2020-07-23 Thread HarshaKiran Reddy Boreddy
Congratulations Ayush Saxena!!!

On Thu, Jul 23, 2020, 8:39 AM Xiaoqiao He  wrote:

> Congratulations Ayush!
>
> Regards,
> He Xiaoqiao
>
> On Thu, Jul 23, 2020 at 9:18 AM Sheng Liu  wrote:
>
> > Congrats, and thanks for your help.
> >
> > > Sree Vaddi wrote on Thu, Jul 23, 2020 at 3:59 AM:
> >
> > > Congratulations, Ayush. Keep up the good work.
> > >
> > >
> > > Thank you.
> > > /Sree
> > >
> > >
> > >
> > > On Wednesday, July 22, 2020, 12:48:55 PM PDT, Team AMR <
> > > teamamr.apa...@gmail.com> wrote:
> > >
> > >  Congrats
> > >
> > >
> > > On Thu, Jul 23, 2020 at 1:10 AM Vinayakumar B wrote:
> > >
> > > > Hi all,
> > > >
> > > > I am very glad to announce that Ayush Saxena was voted to join Apache
> > > > Hadoop PMC.
> > > >
> > > > Congratulations Ayush! Well deserved and thank you for your dedication
> > > > to the project. Please keep up the good work.
> > > >
> > > > Thanks,
> > > > -Vinay
> > > >
> > >
> >
>


Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-07-23 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/756/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint jshint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
   hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

findbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 
   
org.apache.hadoop.yarn.state.StateMachineFactory.generateStateGraph(String) 
makes inefficient use of keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:[line 505] 
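
For context, the keySet-iterator finding above flags the pattern sketched below. This is a hedged, generic illustration (not the actual StateMachineFactory code):

import java.util.Map;
import java.util.TreeMap;

public class EntrySetIterationExample {

  // Flagged pattern: every iteration performs an extra map lookup via get(key).
  static String graphWithKeySet(Map<String, String> transitions) {
    StringBuilder sb = new StringBuilder();
    for (String state : transitions.keySet()) {
      sb.append(state).append(" -> ").append(transitions.get(state)).append('\n');
    }
    return sb.toString();
  }

  // Preferred: entrySet() yields key and value together, avoiding the second lookup.
  static String graphWithEntrySet(Map<String, String> transitions) {
    StringBuilder sb = new StringBuilder();
    for (Map.Entry<String, String> e : transitions.entrySet()) {
      sb.append(e.getKey()).append(" -> ").append(e.getValue()).append('\n');
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    Map<String, String> transitions = new TreeMap<>();
    transitions.put("NEW", "INITED");
    transitions.put("INITED", "STARTED");
    System.out.print(graphWithEntrySet(transitions));
  }
}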

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
   
org.apache.hadoop.yarn.state.StateMachineFactory.generateStateGraph(String) 
makes inefficient use of keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:[line 505] 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

findbugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

findbugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

findbugs :

   module:hadoop-yarn-project 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 
   
org.apache.hadoop.yarn.state.StateMachineFactory.generateStateGraph(String) 
makes inefficient use of keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:[line 505] 

findbugs :

   module:hadoop-mapreduce-project/hadoop-mapreduce-client 
   Primitive is boxed to call Long.compareTo(Long):Long.compareTo(Long): 
use Long.compare(long, long) instead At JVMId.java:[line 101] 
   Possible null pointer dereference in new 
org.apache.hadoop.mapred.LocalContainer

Re: [VOTE] Release Apache Hadoop 3.1.4 (RC4)

2020-07-23 Thread Masatake Iwasaki

+1 (binding).

* verified the checksum and signature of the source tarball (example commands below).
* built from source tarball with native profile on CentOS 7 and OpenJDK 8.
* built documentation and skimmed the contents.
* ran example jobs on a 3-node docker cluster with NN-HA and RM-HA enabled.
* launched pseudo-distributed cluster with Kerberos and SSL enabled, ran
basic EZ operation, ran example MR jobs.
* followed the reproduction step reported in HDFS-15313 to see if the
fix works.
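
For reference, the checksum and signature verification boils down to something like the following (the file names are illustrative and depend on the artifacts in the RC directory):

$ gpg --import KEYS
$ gpg --verify hadoop-3.1.4-src.tar.gz.asc hadoop-3.1.4-src.tar.gz
$ sha512sum hadoop-3.1.4-src.tar.gz   # compare against the published .sha512 file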


Thanks,
Masatake Iwasaki

On 2020/07/21 21:50, Gabor Bota wrote:

Hi folks,

I have put together a release candidate (RC4) for Hadoop 3.1.4.

*
The RC includes, in addition to the previous ones:
* fix for HDFS-15313. Ensure inodes in active filesystem are not
deleted during snapshot delete
* fix for YARN-10347. Fix double locking in
CapacityScheduler#reinitialize in branch-3.1
(https://issues.apache.org/jira/browse/YARN-10347)
* the revert of HDFS-14941, as it caused
HDFS-15421. IBR leak causes standby NN to be stuck in safe mode.
(https://issues.apache.org/jira/browse/HDFS-15421)
* HDFS-15323, as requested.
(https://issues.apache.org/jira/browse/HDFS-15323)
*

The RC is available at: http://people.apache.org/~gabota/hadoop-3.1.4-RC4/
The RC tag in git is here:
https://github.com/apache/hadoop/releases/tag/release-3.1.4-RC4
The maven artifacts are staged at
https://repository.apache.org/content/repositories/orgapachehadoop-1275/

You can find my public key at:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
and http://keys.gnupg.net/pks/lookup?op=get&search=0xB86249D83539B38C

Please try the release and vote. The vote will run for 8 weekdays,
until July 31, 2020, 23:00 CET.


Thanks,
Gabor




[jira] [Created] (HADOOP-17152) Implement wrapper for guava newArrayList and newLinkedList

2020-07-23 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17152:
--

 Summary: Implement wrapper for guava newArrayList and newLinkedList
 Key: HADOOP-17152
 URL: https://issues.apache.org/jira/browse/HADOOP-17152
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: common
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


The guava Lists class provides some wrappers around java's ArrayList and LinkedList.

Replacing the method calls throughout the code can be invasive because guava
offers some APIs that do not exist in java.util. This Jira covers implementing
those missing APIs in hadoop-common as a step toward getting rid of guava.
 * create a wrapper class org.apache.hadoop.util.unguava.Lists
 * implement the following methods in Lists (a sketch follows this list):
 ** public static <E> ArrayList<E> newArrayList()
 ** public static <E> ArrayList<E> newArrayList(E... elements)
 ** public static <E> ArrayList<E> newArrayList(Iterable<? extends E> elements)
 ** public static <E> ArrayList<E> newArrayList(Iterator<? extends E> elements)
 ** public static <E> ArrayList<E> newArrayListWithCapacity(int initialArraySize)
 ** public static <E> LinkedList<E> newLinkedList()
 ** public static <E> LinkedList<E> newLinkedList(Iterable<? extends E> elements)
 ** public static <E> List<E> asList(@Nullable E first, E[] rest)
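
A minimal sketch of what such a wrapper could look like, backed by plain java.util and mirroring guava's signatures. This is illustrative only; the final class layout and null handling are up to the implementation:

{code:java}
package org.apache.hadoop.util.unguava;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

/** Illustrative guava-free stand-in for com.google.common.collect.Lists. */
public final class Lists {

  private Lists() {
  }

  public static <E> ArrayList<E> newArrayList() {
    return new ArrayList<>();
  }

  @SafeVarargs
  public static <E> ArrayList<E> newArrayList(E... elements) {
    ArrayList<E> list = new ArrayList<>(elements.length);
    Collections.addAll(list, elements);
    return list;
  }

  @SuppressWarnings("unchecked")
  public static <E> ArrayList<E> newArrayList(Iterable<? extends E> elements) {
    // Collections know their size, so copy directly; otherwise fall back to the iterator.
    if (elements instanceof Collection) {
      return new ArrayList<>((Collection<? extends E>) elements);
    }
    return newArrayList(elements.iterator());
  }

  public static <E> ArrayList<E> newArrayList(Iterator<? extends E> elements) {
    ArrayList<E> list = new ArrayList<>();
    while (elements.hasNext()) {
      list.add(elements.next());
    }
    return list;
  }

  public static <E> ArrayList<E> newArrayListWithCapacity(int initialArraySize) {
    return new ArrayList<>(initialArraySize);
  }

  public static <E> LinkedList<E> newLinkedList() {
    return new LinkedList<>();
  }

  public static <E> LinkedList<E> newLinkedList(Iterable<? extends E> elements) {
    LinkedList<E> list = new LinkedList<>();
    for (E e : elements) {
      list.add(e);
    }
    return list;
  }

  // @Nullable annotation omitted here to keep the sketch dependency-free.
  public static <E> List<E> asList(E first, E[] rest) {
    List<E> list = new ArrayList<>(rest.length + 1);
    list.add(first);
    list.addAll(Arrays.asList(rest));
    return list;
  }
}
{code}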






[jira] [Created] (HADOOP-17151) Need to upgrade the version of jetty

2020-07-23 Thread liusheng (Jira)
liusheng created HADOOP-17151:
-

 Summary: Need to upgrade the version of jetty
 Key: HADOOP-17151
 URL: https://issues.apache.org/jira/browse/HADOOP-17151
 Project: Hadoop Common
  Issue Type: Bug
Reporter: liusheng


I have tried to configure and start the Hadoop KMS service, but it failed to start.
The error log messages:
{noformat}
2020-07-23 10:57:31,872 INFO  Server - jetty-9.4.20.v20190813; built: 
2019-08-13T21:28:18.144Z; git: 84700530e645e812b336747464d6fbbf370c9a20; jvm 
1.8.0_252-8u252-b09-1~18.04-b09
2020-07-23 10:57:31,899 INFO  session - DefaultSessionIdManager workerName=node0
2020-07-23 10:57:31,899 INFO  session - No SessionScavenger set, using defaults
2020-07-23 10:57:31,901 INFO  session - node0 Scavenging every 66ms
2020-07-23 10:57:31,912 INFO  ContextHandler - Started 
o.e.j.s.ServletContextHandler@5bf0d49{logs,/logs,file:///opt/hadoop-3.4.0-SNAPSHOT/logs/,AVAILABLE}
2020-07-23 10:57:31,913 INFO  ContextHandler - Started 
o.e.j.s.ServletContextHandler@7c7a06ec{static,/static,jar:file:/opt/hadoop-3.4.0-SNAPSHOT/share/hadoop/common/hadoop-kms-3.4.0-SNAPSHOT.jar!/webapps/static,AVAILABLE}
2020-07-23 10:57:31,986 INFO  TypeUtil - JVM Runtime does not support Modules
2020-07-23 10:57:32,015 INFO  KMSWebApp - 
-
2020-07-23 10:57:32,015 INFO  KMSWebApp -   Java runtime version : 
1.8.0_252-8u252-b09-1~18.04-b09
2020-07-23 10:57:32,015 INFO  KMSWebApp -   User: hadoop
2020-07-23 10:57:32,015 INFO  KMSWebApp -   KMS Hadoop Version: 3.4.0-SNAPSHOT
2020-07-23 10:57:32,015 INFO  KMSWebApp - 
-
2020-07-23 10:57:32,023 INFO  KMSACLs - 'CREATE' ACL '*'
2020-07-23 10:57:32,024 INFO  KMSACLs - 'DELETE' ACL '*'
2020-07-23 10:57:32,024 INFO  KMSACLs - 'ROLLOVER' ACL '*'
2020-07-23 10:57:32,024 INFO  KMSACLs - 'GET' ACL '*'
2020-07-23 10:57:32,024 INFO  KMSACLs - 'GET_KEYS' ACL '*'
2020-07-23 10:57:32,024 INFO  KMSACLs - 'GET_METADATA' ACL '*'
2020-07-23 10:57:32,024 INFO  KMSACLs - 'SET_KEY_MATERIAL' ACL '*'
2020-07-23 10:57:32,024 INFO  KMSACLs - 'GENERATE_EEK' ACL '*'
2020-07-23 10:57:32,024 INFO  KMSACLs - 'DECRYPT_EEK' ACL '*'
2020-07-23 10:57:32,025 INFO  KMSACLs - default.key.acl. for KEY_OP 'READ' is 
set to '*'
2020-07-23 10:57:32,025 INFO  KMSACLs - default.key.acl. for KEY_OP 
'MANAGEMENT' is set to '*'
2020-07-23 10:57:32,025 INFO  KMSACLs - default.key.acl. for KEY_OP 
'GENERATE_EEK' is set to '*'
2020-07-23 10:57:32,025 INFO  KMSACLs - default.key.acl. for KEY_OP 
'DECRYPT_EEK' is set to '*'
2020-07-23 10:57:32,080 INFO  KMSAudit - Initializing audit logger class 
org.apache.hadoop.crypto.key.kms.server.SimpleKMSAuditLogger
2020-07-23 10:57:32,537 INFO  KMSWebServer - SHUTDOWN_MSG:
/
SHUTDOWN_MSG: Shutting down KMSWebServer at 
hadoop-benchmark/172.17.0.2{noformat}
I have googled the error and found a similar issue: 
[https://github.com/eclipse/jetty.project/issues/4064]

It looks like a jetty bug that has been fixed in jetty >= 9.4.21; Hadoop
currently uses jetty 9.4.20 (see hadoop-project/pom.xml).
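
A hedged sketch of the kind of change this would need, assuming the version is still controlled by the jetty.version property in hadoop-project/pom.xml (the exact target should be whichever 9.4.x release contains the fix, e.g. 9.4.21 or later):

{noformat}
<!-- hadoop-project/pom.xml -->
<jetty.version>9.4.21.v20190926</jetty.version>
{noformat}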






[jira] [Created] (HADOOP-17150) ABFS: Test failure: Disable ITestAzureBlobFileSystemDelegationSAS tests

2020-07-23 Thread Sneha Vijayarajan (Jira)
Sneha Vijayarajan created HADOOP-17150:
--

 Summary: ABFS: Test failure: Disable 
ITestAzureBlobFileSystemDelegationSAS tests
 Key: HADOOP-17150
 URL: https://issues.apache.org/jira/browse/HADOOP-17150
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sneha Vijayarajan
Assignee: Sneha Vijayarajan


ITestAzureBlobFileSystemDelegationSAS has tests for the SAS feature, which is in
preview stage. The tests should not run until the API version reflects the preview
version, because they will fail when run against production clusters.
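
A hedged sketch of one way to gate such tests until the preview API is available. The flag below is made up for illustration; JUnit's Assume is the only real API relied on:

{code:java}
import org.junit.Assume;
import org.junit.Before;
import org.junit.Test;

public class DelegationSasPreviewGuardExample {

  // Hypothetical flag; in practice this would come from test configuration or from
  // comparing the service API version against the preview version the SAS feature needs.
  private static final boolean SAS_PREVIEW_API_AVAILABLE = false;

  @Before
  public void requirePreviewApi() {
    // Marks every test in the class as skipped (not failed) when the preview API is absent.
    Assume.assumeTrue("SAS preview API not available; skipping", SAS_PREVIEW_API_AVAILABLE);
  }

  @Test
  public void testSomethingThatNeedsPreviewSas() {
    // real assertions would go here
  }
}
{code}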






[jira] [Created] (HADOOP-17149) ABFS: Test failure: testFailedRequestWhenCredentialsNotCorrect fails when run with SharedKey

2020-07-23 Thread Sneha Vijayarajan (Jira)
Sneha Vijayarajan created HADOOP-17149:
--

 Summary: ABFS: Test failure: 
testFailedRequestWhenCredentialsNotCorrect fails when run with SharedKey
 Key: HADOOP-17149
 URL: https://issues.apache.org/jira/browse/HADOOP-17149
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Reporter: Sneha Vijayarajan
Assignee: Sneha Vijayarajan
 Fix For: 3.4.0


When authentication is set to SharedKey, the test below fails.

 

[ERROR]   
ITestGetNameSpaceEnabled.testFailedRequestWhenCredentialsNotCorrect:161 
Expecting 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsRestOperationException 
with text "Server failed to authenticate the request. Make sure the value of 
Authorization header is formed correctly including the signature.", 403 but got 
: "void"

 

2 problems:
 # This test should probably be disabled for SharedKey
 # The assert is wrong. The expected HTTP status code should be 401.


