Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-10-08 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1283/

[Oct 7, 2019 4:04:36 AM] (shashikant) HDDS-2169. Avoid buffer copies while submitting client requests in
[Oct 7, 2019 7:38:08 AM] (aajisaka) HADOOP-16512. [hadoop-tools] Fix order of actual and expected expression
[Oct 7, 2019 9:35:39 AM] (elek) HDDS-2252. Enable gdpr robot test in daily build
[Oct 7, 2019 12:07:46 PM] (stevel) HADOOP-16587. Make ABFS AAD endpoints configurable.
[Oct 7, 2019 5:17:25 PM] (bharat) HDDS-2239. Fix TestOzoneFsHAUrls (#1600)
[Oct 7, 2019 6:44:30 PM] (surendralilhore) HDFS-14373. EC : Decoding is failing when block group last incomplete
[Oct 7, 2019 8:59:49 PM] (aengineer) HDDS-2238. Container Data Scrubber spams log in empty cluster
[Oct 7, 2019 9:10:57 PM] (aengineer) HDDS-2264. Improve output of TestOzoneContainer
[Oct 7, 2019 9:30:23 PM] (aengineer) HDDS-2259. Container Data Scrubber computes wrong checksum
[Oct 7, 2019 9:38:54 PM] (aengineer) HDDS-2262. SLEEP_SECONDS: command not found
[Oct 7, 2019 10:41:42 PM] (aengineer) HDDS-2245. Use dynamic ports for SCM in TestSecureOzoneCluster




-1 overall


The following subsystems voted -1:
asflicense compile findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
   Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
   org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]
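The two WorkerId findings (the argument is assumed to be a WorkerId, and null is never checked) are the classic shape of a fragile equals(). A minimal sketch of a null-safe, type-safe equals/hashCode pair follows; the field name workerId is assumed purely for illustration and is not taken from the real MaWo class.

import java.util.Objects;

// Hypothetical stand-in for org.apache.hadoop.applications.mawo.server.worker.WorkerId,
// showing only the equals/hashCode pattern the FindBugs warnings call for.
public class WorkerIdSketch {
  private final String workerId;   // assumed field, for illustration only

  public WorkerIdSketch(String workerId) {
    this.workerId = workerId;
  }

  @Override
  public boolean equals(Object obj) {
    if (this == obj) {
      return true;                              // same reference
    }
    if (!(obj instanceof WorkerIdSketch)) {
      return false;                             // safely handles null and foreign types
    }
    WorkerIdSketch other = (WorkerIdSketch) obj;
    return Objects.equals(workerId, other.workerId);
  }

  @Override
  public int hashCode() {
    return Objects.hash(workerId);              // kept consistent with equals
  }
}

For the TaskStatus finding, the usual options are either to override clone() (delegating to super.clone()) or to drop the Cloneable marker and provide a copy constructor instead.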

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos
   Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:[line 66]
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
   Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
   Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
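Two of these findings follow well-known patterns: the default-encoding warnings come from calling new String(byte[]) without a charset, and the unclosed-stream warning from opening an InputStream outside of try-with-resources. A minimal sketch under those assumptions is below; the class and method names are placeholders, not the real hadoop-cos APIs.

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Illustrative sketch only; not the actual CosNativeFileSystemStore code.
public class CosFindingsSketch {

  // "Reliance on default encoding": pass an explicit charset so the result
  // does not depend on the JVM's platform encoding.
  static String bytesToString(byte[] raw) {
    return new String(raw, StandardCharsets.UTF_8);
  }

  // "May fail to clean up java.io.InputStream": try-with-resources closes the
  // stream on every exit path, including when the upload call throws.
  static void uploadPart(File localFile) throws IOException {
    try (InputStream in = new FileInputStream(localFile)) {
      // hand 'in' to the actual store/SDK upload call here
    }
  }
}

The getBuffer() finding is typically addressed by returning a defensive copy or a read-only view rather than the internal buffer itself.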

FindBugs :

   module:hadoop-ozone/csi
   Useless control flow in csi.v1.Csi$CapacityRange$Builder.maybeForceBuilderInitialization() At Csi.java:[line 15977]
   Class csi.v1.Csi$ControllerExpandVolumeRequest defines non-transient non-serializable instance field secrets_ In Csi.java
   Useless control flow in csi.v1.Csi$ControllerExpandVolumeRequest$Builder.maybeForceBuilderInitialization() At Csi.java:[line

Re: [DISCUSS] Release Docs pointers Hadoop site

2019-10-08 Thread Elek, Marton

To be honest, I have no idea. I don't know about the historical meaning.

But as there is no other feedback, here are my guesses based on pure logic:

 * current -> should point to the release with the highest number (3.2.1)
 * stable -> to the stable 3.x release with the highest number (3.2.1 as of now)
 * current2 -> latest 2.x release
 * stable2 -> latest stable 2.x release

>> 1. But if the release manager of 3.1 line thinks 3.1.3 is stable, and 3.2
>> line is also in stable state, which release should get precedence to be
>> called as *stable* in any release line (2.x or 3.x) ?

It depends on whether stable2 means (the second highest stable release) or (the stable release from the 2.x line). I think the second meaning is more reasonable.


>> 3.1.3 is getting released now, should
>> http://hadoop.apache.org/docs/current/ be updated to 3.1.3? Is that
>> the norm?

No. The stable link should point to the highest stable release, not to whichever stable release happened to be published most recently.


Marton

On 9/30/19 10:09 AM, Sunil Govindan wrote:

Bumping up this thread again for feedback.
@Zhankun Tang is now waiting for confirmation to complete the 3.1.3 release publish activities.

- Sunil

On Fri, Sep 27, 2019 at 11:03 AM Sunil Govindan  wrote:


Hi Folks,

At present,
http://hadoop.apache.org/docs/stable/  points to *Apache Hadoop 3.2.1*
http://hadoop.apache.org/docs/current/ points to *Apache Hadoop 3.2.1*
http://hadoop.apache.org/docs/stable2/  points to *Apache Hadoop 2.9.2*
http://hadoop.apache.org/docs/current2/ points to *Apache Hadoop 2.9.2*

3.2.1 was released recently. *Now 3.1.3 has completed voting* and it is in
the final stages of staging.
As I see it,
a) http://hadoop.apache.org/docs/stable/ will still point to 3.2.1?
b) http://hadoop.apache.org/docs/current/ should point to 3.1.3?

Now my questions,
1. But if the release manager of 3.1 line thinks 3.1.3 is stable, and 3.2
line is also in stable state, which release should get precedence to be
called as *stable* in any release line (2.x or 3.x) ?
or do we need a vote or discuss thread to decide which release shall be
called as stable per release line?
2. Given that 3.2.1 is released and the site points to 3.2.1 as stable, then when
3.1.3 is getting released now, should
http://hadoop.apache.org/docs/current/ be updated to 3.1.3? Is that
the norm?

Thanks
Sunil




