Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-11-21 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/512/

[Nov 19, 2019 10:39:14 PM] (ericp) YARN-7411. Inter-Queue preemption's computeFixpointAllocation need to




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime longer than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
   hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client

   Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 335]
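The "unboxed and then immediately reboxed" warning above usually points at a wasteful box/unbox round trip. A minimal sketch of the pattern and its fix follows; the class, field, and method names here are illustrative assumptions, not the actual ColumnRWHelper code.

```java
import java.util.HashMap;
import java.util.Map;

public class BoxingDemo {
    static Map<Long, String> results = new HashMap<>();

    // Flagged pattern: 'timestamp' is unboxed into a primitive long,
    // then immediately reboxed when used as a Long map key again.
    static String readFlagged(Long timestamp) {
        long ts = timestamp;           // unbox
        return results.get((Long) ts); // rebox
    }

    // Fix: keep the boxed value and avoid the round trip entirely.
    static String readFixed(Long timestamp) {
        return results.get(timestamp);
    }
}
```

The fix is purely local: FindBugs flags the round trip because it allocates a fresh wrapper object for no behavioral gain.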

Failed junit tests :

   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.yarn.api.TestPBImplRecords 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   cc:
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/512/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt [4.0K]

   javac:
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/512/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt [328K]

   cc:
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/512/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt [4.0K]

   javac:
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/512/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt [308K]

   checkstyle:
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/512/artifact/out/diff-checkstyle-root.txt [16M]

   hadolint:
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/512/artifact/out/diff-patch-hadolint.txt [4.0K]

   pathlen:
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/512/artifact/out/pathlen.txt [12K]

   pylint:
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/512/artifact/out/diff-patch-pylint.txt [24K]

   shellcheck:
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/512/artifact/out/diff-patch-shellcheck.txt [72K]

   shelldocs:
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/512/artifact/out/diff-patch-shelldocs.txt [8.0K]

   whitespace:
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/512/artifact/out/whitespace-eol.txt [12M]
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/512/artifact/out/whitespace-tabs.txt [1.3M]

   xml:
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/512/artifact/out/xml.txt [12K]

   findbugs:
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/512/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html [8.0K]

   javadoc:
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/512/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt [16K]
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/512/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt [1.1M]

   unit:
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/512/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [236K]
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/512/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [12K]
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/512/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [96K]


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-11-21 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1327/

[Nov 20, 2019 7:18:56 AM] (aajisaka) HADOOP-16719. Remove the disallowed element config within
[Nov 20, 2019 2:32:12 PM] (github) HADOOP-16707. NPE in UGI.getCurrentUser in ITestAbfsIdentityTransformer
[Nov 20, 2019 3:56:33 PM] (xkrogen) HADOOP-16700. Track response time separately in Server.Call to avoid
[Nov 20, 2019 6:30:42 PM] (surendralilhore) HDFS-14995. Use log variable directly instead of passing as argument in
[Nov 20, 2019 6:51:58 PM] (inigoiri) HDFS-14949. Add getServerDefaults() support to HttpFS. Contributed by
[Nov 20, 2019 9:07:43 PM] (weichiu) HADOOP-15686. Supress bogus AbstractWadlGeneratorGrammarGenerator in KMS
[Nov 20, 2019 10:40:10 PM] (weichiu) HADOOP-15852. Refactor QuotaUsage. Contributed by David Mollitor.
[Nov 21, 2019 12:44:44 AM] (weichiu) HADOOP-16276. Fix jsvc startup command in hadoop-functions.sh due to




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime longer than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core

   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
   Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
   org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]
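The three mawo warnings above are classic Object-contract findings: a Cloneable class that never defines clone(), and an equals() that neither checks for null nor verifies the argument's type before casting. A minimal sketch of a conforming class follows; the field name is an assumption for illustration, not the actual MaWo code.

```java
public class WorkerId implements Cloneable {
    private final String id;

    public WorkerId(String id) { this.id = id; }

    // Safe equals: identity shortcut, then an instanceof check that
    // rejects both null and foreign types before the cast.
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof WorkerId)) return false; // false for null too
        return id.equals(((WorkerId) o).id);
    }

    // equals and hashCode must be overridden together.
    @Override
    public int hashCode() { return id.hashCode(); }

    // Defining clone() addresses the "implements Cloneable but does not
    // define or use clone method" warning.
    @Override
    public WorkerId clone() {
        try {
            return (WorkerId) super.clone();
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e); // unreachable: we are Cloneable
        }
    }
}
```

Note that `instanceof` returns false for a null operand, so the single check covers both the missing-null-check and the unchecked-cast findings at once.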

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos

   Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:[line 66]
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
   Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
   Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
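Two of the hadoop-cos warnings above have well-known remedies: `new String(byte[])` silently uses the platform default charset, and an InputStream opened outside a try-with-resources block may never be closed if an exception interrupts the upload. The sketch below shows both the flagged pattern and a fix; the method names and byte-counting body are illustrative assumptions, not the actual CosNativeFileSystemStore code.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class CosFixesDemo {
    // Flagged: decoding with the platform default charset makes the
    // result depend on the JVM's environment.
    static String decodeFlagged(byte[] md5Hash) {
        return new String(md5Hash);
    }

    // Fix: name the charset explicitly.
    static String decodeFixed(byte[] md5Hash) {
        return new String(md5Hash, StandardCharsets.UTF_8);
    }

    // Fix for the resource-leak warning: try-with-resources guarantees
    // the stream is closed even if reading throws mid-way.
    static long uploadPart(byte[] data) throws IOException {
        try (InputStream in = new ByteArrayInputStream(data)) {
            long bytesRead = 0;
            while (in.read() != -1) {
                bytesRead++;
            }
            return bytesRead;
        }
    }
}
```

The "Obligation ... is not discharged" wording is FindBugs' way of saying a code path exists on which close() is never reached; try-with-resources discharges the obligation on every path.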

Failed junit tests :

   hadoop.hdfs.server.namenode.TestRedudantBlocks
   hadoop.yarn.server.webproxy.amfilter.TestAmFilter
   hadoop.yarn.server.webproxy.TestWebAppProxyServlet
   hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageDomain

Re: [DISCUSS] Making 2.10 the last minor 2.x release

2019-11-21 Thread Jonathan Hung
Thanks for the detailed thoughts, everyone.

Eric (Badger), my understanding is the same as yours re. minor vs patch
releases. As for putting features into minor/patch releases, if we keep the
convention of putting new features only into minor releases, my assumption
is still that it's unlikely people will want to get them into branch-2
(based on the 2.10.0 release process). For the java 11 issue, we haven't
even really removed support for java 7 in branch-2 (much less java 8), so I
feel moving to java 11 would go along with a move to branch 3. And as you
mentioned, if people really want to use java 11 on branch-2, we can always
revive branch-2. But for now I think the convenience of not needing to port
to both branch-2 and branch-2.10 (and below) outweighs the cost of
potentially needing to revive branch-2.

Jonathan Hung


On Wed, Nov 20, 2019 at 10:50 AM Eric Yang  wrote:

> +1 for 2.10.x as last release for 2.x version.
>
> Software becomes more compatible when more companies stress test the
> same software and make improvements in trunk.  Some may be extra cautious
> about moving up the version because of internal obligations to keep things
> running.  Company obligations should not be the driving force behind
> maintaining Hadoop branches.  There is no proper collaboration in the
> community when every name-brand company maintains its own Hadoop 2.x
> version.  I think it would be healthier for the community to reduce the
> branch forking and spend energy on trunk to harden the software.  This
> will give more confidence to move up the version than trying to fix n
> permutations of breakage, like the Flash fixing the timeline.
>
> The Apache license states that there is no warranty of any kind for code
> contributions.  A leaner community release process should improve software
> quality when eyes are on trunk, and help steer toward the same end goals.
>
> regards,
> Eric
>
>
>
> On Tue, Nov 19, 2019 at 3:03 PM Eric Badger
>  wrote:
>
>> Hello all,
>>
>> Is it written anywhere what the difference is between a minor release and
>> a
>> point/dot/maintenance (I'll use "point" from here on out) release? I have
>> looked around and I can't find anything other than some compatibility
>> documentation in 2.x that has since been removed in 3.x [1] [2]. I think
>> this would help shape my opinion on whether or not to keep branch-2 alive.
>> My current understanding is that we can't really break compatibility in
>> either a minor or point release. But the only mention of the difference
>> between minor and point releases is how to deal with Stable, Evolving, and
>> Unstable tags, and how to deal with changing default configuration values.
>> So it seems like there really isn't a big official difference between the
>> two. In my mind, the functional difference between the two is that the
>> minor releases may have added features and rewrites, while the point
>> releases only have bug fixes. This might be an incorrect understanding,
>> but
>> that's what I have gathered from watching the releases over the last few
>> years. Whether or not this is a correct understanding, I think that this
>> needs to be documented somewhere, even if it is just a convention.
>>
>> Given my assumed understanding of minor vs point releases, here are the
>> pros/cons that I can think of for having a branch-2. Please add on or
>> correct me for anything you feel is missing or inadequate.
>> Pros:
>> - Features/rewrites/higher-risk patches are less likely to be put into
>> 2.10.x
>> - It is less necessary to move to 3.x
>>
>> Cons:
>> - Bug fixes are less likely to be put into 2.10.x
>> - An extra branch to maintain
>>   - Committers have an extra branch (5 vs 4 total branches) to commit
>> patches to if they should go all the way back to 2.10.x
>> - It is less necessary to move to 3.x
>>
>> So on the one hand you get added stability in fewer features being
>> committed to 2.10.x, but then on the other you get fewer bug fixes being
>> committed. In a perfect world, we wouldn't have to make this tradeoff. But
>> we don't live in a perfect world and committers will make mistakes either
>> because of lack of knowledge or simply because they made a mistake. If we
>> have a branch-2, committers will forget, not know to, or choose not to
>> (for
>> whatever reason) commit valid bug fixes back all the way to branch-2.10.
>> If
>> we don't have a branch-2, committers who want their borderline risky
>> feature in the 2.x line will err on the side of putting it into
>> branch-2.10
>> instead of proposing the creation of a branch-2. Clearly I have made quite
>> a few assumptions here based on my own experiences, so I would like to
>> hear
>> if others have similar or opposing views.
>>
>> As far as 3.x goes, to me it seems like some of the reasoning for killing
>> branch-2 is due to an effort to push the community towards 3.x. This is
>> why
>> I have added movement to 3.x as both a pro and a con. As a community
>> trying
>> to move forward, keeping as many