[jira] [Created] (HADOOP-16871) Upgrade Netty version to 4.1.45.Final

2020-02-18 Thread Aray Chenchu Sukesh (Jira)
Aray Chenchu Sukesh created HADOOP-16871:


 Summary: Upgrade Netty version to 4.1.45.Final
 Key: HADOOP-16871
 URL: https://issues.apache.org/jira/browse/HADOOP-16871
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.3.0
Reporter: Aray Chenchu Sukesh
 Fix For: 3.3.0


[CVE-2019-20444|https://rnd-vulncenter.huawei.com/vuln/toViewOfficialDetail?cveId=CVE-2019-20444]

[CVE-2019-16869|https://rnd-vulncenter.huawei.com/vuln/toViewOfficialDetail?cveId=CVE-2019-16869]

We should upgrade the Netty dependency to version 4.1.45.Final to address these vulnerabilities.
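For illustration, a minimal sketch of the kind of change this implies, assuming the Netty version is managed through a single property in hadoop-project/pom.xml (the actual property name there may differ):

{code:xml}
<!-- Hypothetical sketch only: bump the managed Netty version property -->
<properties>
  <netty.version>4.1.45.Final</netty.version>
</properties>

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.netty</groupId>
      <artifactId>netty-all</artifactId>
      <version>${netty.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
{code}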






[jira] [Created] (HADOOP-16870) Use spotbugs-maven-plugin instead of findbugs-maven-plugin

2020-02-18 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-16870:
--

 Summary: Use spotbugs-maven-plugin instead of findbugs-maven-plugin
 Key: HADOOP-16870
 URL: https://issues.apache.org/jira/browse/HADOOP-16870
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Akira Ajisaka


findbugs-maven-plugin is no longer maintained. Use spotbugs-maven-plugin 
instead.
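For reference, a minimal sketch of the swap in a module pom (illustrative only; the plugin version shown is an assumption, not necessarily the one the patch will pin):

{code:xml}
<!-- Before: org.codehaus.mojo:findbugs-maven-plugin -->
<!-- After (sketch): -->
<plugin>
  <groupId>com.github.spotbugs</groupId>
  <artifactId>spotbugs-maven-plugin</artifactId>
  <version>3.1.12</version>
</plugin>
{code}

SpotBugs reads the same exclude/include filter XML format as FindBugs, so existing findbugsExcludeFile.xml filters should carry over.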






[jira] [Created] (HADOOP-16869) mvn findbugs:findbugs fails

2020-02-18 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-16869:
--

 Summary: mvn findbugs:findbugs fails
 Key: HADOOP-16869
 URL: https://issues.apache.org/jira/browse/HADOOP-16869
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Akira Ajisaka


mvn findbugs:findbugs is failing:
{noformat}
[ERROR] Failed to execute goal org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs (default-cli) on project hadoop-project: Unable to parse configuration of mojo org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs for parameter pluginArtifacts: Cannot assign configuration entry 'pluginArtifacts' with value '${plugin.artifacts}' of type java.util.Collections.UnmodifiableRandomAccessList to property of type java.util.ArrayList -> [Help 1]
{noformat}

We have to update the version of findbugs-maven-plugin.
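For illustration, the kind of one-line change this implies (3.0.5 is the latest findbugs-maven-plugin release and is only a suggested target here; the real fix may instead be moving to spotbugs-maven-plugin per HADOOP-16870):

{code:xml}
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>findbugs-maven-plugin</artifactId>
  <!-- 3.0.0 declares pluginArtifacts as ArrayList; later releases should accept what newer Maven injects -->
  <version>3.0.5</version>
</plugin>
{code}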






Re: [DISCUSS] EOL Hadoop branch-2.8

2020-02-18 Thread Akira Ajisaka
Thanks Wei-Chiu for starting the discussion,

+1 for the EoL.

-Akira

On Tue, Feb 18, 2020 at 4:59 PM Ayush Saxena  wrote:

> Thanx Wei-Chiu for initiating this
> +1 for marking 2.8 EOL
>
> -Ayush
>
> > On 17-Feb-2020, at 11:14 PM, Wei-Chiu Chuang  wrote:
> >
> > The last Hadoop 2.8.x release, 2.8.5, was GA on September 15th 2018.
> >
> > It's been 17 months since the release and the community by and large have
> > moved up to 2.9/2.10/3.x.
> >
> > With Hadoop 3.3.0 over the horizon, is it time to start the EOL discussion
> > and reduce the number of active branches?
>


[jira] [Created] (HADOOP-16868) ipc.Server readAndProcess threw NullPointerException

2020-02-18 Thread Tsz-wo Sze (Jira)
Tsz-wo Sze created HADOOP-16868:
---

 Summary: ipc.Server readAndProcess threw NullPointerException
 Key: HADOOP-16868
 URL: https://issues.apache.org/jira/browse/HADOOP-16868
 Project: Hadoop Common
  Issue Type: Bug
  Components: rpc-server
Reporter: Tsz-wo Sze


{code}
2020-01-18 10:19:02,109 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for port 8020: readAndProcess from client xx.xx.xx.xx threw exception [java.lang.NullPointerException]
java.lang.NullPointerException
        at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1676)
        at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:935)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:791)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:762)
{code}






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-02-18 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/

[Feb 17, 2020 6:55:10 AM] (github) HDFS-15173. RBF: Delete repeated configuration
[Feb 17, 2020 7:13:33 PM] (ayushsaxena) HADOOP-13666. Supporting rack exclusion in countNumOfAvailableNodes in
[Feb 17, 2020 10:06:34 PM] (stevel) HADOOP-15961. S3A committers: make sure there's regular progress()
[Feb 17, 2020 10:14:39 PM] (github) HADOOP-16759. FileSystem Javadocs to list what breaks on API changes




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:[line 66] 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
CosNInputStream.java:[line 87] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, 
File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long):in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long): new String(byte[]) At 
CosNativeFileSystemStore.java:[line 178] 
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, 
String, String, int) may fail to clean up java.io.InputStream Obligation to 
clean up resource created at CosNativeFileSystemStore.java:fail to clean up 
java.io.InputStream Obligation to clean up resource created at 
CosNativeFileSystemStore.java:[line 252] is not discharged 

Failed junit tests :

   hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped 
   hadoop.hdfs.server.federation.router.TestRouterFaultTolerant 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/diff-compile-cc-root.txt
  [8.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/diff-compile-javac-root.txt
  [428K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/diff-checkstyle-root.txt
  [16M]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/pathlen.txt
  [12K]

   pylint:

   

Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-02-18 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/

[Feb 17, 2020 9:06:00 PM] (kihwal) Revert "HDFS-11156. Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST
[Feb 17, 2020 9:49:48 PM] (kihwal) HDFS-12459. Fix revert: Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem 
   hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/diff-compile-cc-root-jdk1.8.0_242.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/diff-compile-javac-root-jdk1.8.0_242.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_242.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [232K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
  [32K]
   

[GitHub] [hadoop-thirdparty] smengcl opened a new pull request #5: HADOOP-16867. [thirdparty] Add shaded JaegerTracer

2020-02-18 Thread GitBox
smengcl opened a new pull request #5: HADOOP-16867. [thirdparty] Add shaded JaegerTracer
URL: https://github.com/apache/hadoop-thirdparty/pull/5
 
 
   Add artifact hadoop-shaded-jaeger to hadoop-thirdparty for OpenTracing work 
(HADOOP-15566).
   
   With this commit in thirdparty, Hadoop trunk with HADOOP-15566 can pass the 
enforcer check.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Created] (HADOOP-16867) [thirdparty] Add shaded JaegerTracer

2020-02-18 Thread Siyao Meng (Jira)
Siyao Meng created HADOOP-16867:
---

 Summary: [thirdparty] Add shaded JaegerTracer
 Key: HADOOP-16867
 URL: https://issues.apache.org/jira/browse/HADOOP-16867
 Project: Hadoop Common
  Issue Type: Task
Reporter: Siyao Meng
Assignee: Siyao Meng


Add artifact {{hadoop-shaded-jaeger}} to {{hadoop-thirdparty}} for OpenTracing 
work in HADOOP-15566.

CC [~weichiu]
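As a rough sketch of how a downstream Hadoop module might then consume the new artifact (coordinates follow the existing hadoop-thirdparty convention; the version property name here is an assumption):

{code:xml}
<dependency>
  <groupId>org.apache.hadoop.thirdparty</groupId>
  <artifactId>hadoop-shaded-jaeger</artifactId>
  <version>${hadoop-thirdparty.version}</version>
</dependency>
{code}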






Re: [DISCUSS] EOL Hadoop branch-2.8

2020-02-18 Thread Ayush Saxena
Thanx Wei-Chiu for initiating this
+1 for marking 2.8 EOL

-Ayush

> On 17-Feb-2020, at 11:14 PM, Wei-Chiu Chuang  wrote:
> 
> The last Hadoop 2.8.x release, 2.8.5, was GA on September 15th 2018.
> 
> It's been 17 months since the release and the community by and large have
> moved up to 2.9/2.10/3.x.
> 
> With Hadoop 3.3.0 over the horizon, is it time to start the EOL discussion
> and reduce the number of active branches?
