Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-12-09 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/531/

No changes

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-12-09 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1346/

[Dec 9, 2019 6:46:08 AM] (iwasakims) Bump bower from 1.7.7 to 1.8.8 (#1683)
[Dec 9, 2019 9:37:34 AM] (aajisaka) YARN-9985. Unsupported transitionToObserver option displaying for
[Dec 10, 2019 1:30:57 AM] (github) HDFS-14522. Allow compact property description in xml in httpfs. (#1737)


Re: [DISCUSS] Making 2.10 the last minor 2.x release

2019-12-09 Thread Jonathan Hung
It's done. The new commit chain is: trunk -> branch-3.2 -> branch-3.1 ->
branch-2.10 -> branch-2.9 -> branch-2.8 (branch-2 no longer exists, please
don't try to commit to it)

Completed procedure:

   - Verified everything in old branch-2.10 was in old branch-2
   - Deleted old branch-2.10
   - Renamed branch-2 to (new) branch-2.10
   - Set version in new branch-2.10 to 2.10.1-SNAPSHOT
   - Renamed fix versions from 2.11.0 to 2.10.1
   - Removed 2.11.0 as a version in HADOOP/YARN/HDFS/MAPREDUCE
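The branch shuffle above can be sketched as git commands, tried out here in a throwaway local repository. This is a hedged illustration only: the real procedure ran against the Apache remote with INFRA's help (INFRA-19521), so the exact commands, remotes, and permissions differ.

```shell
# Simulate the rename steps in a scratch repo; branch names match the
# thread, but the commands are illustrative, not the ones INFRA ran.
set -e
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name=t -c user.email=t@x commit -q --allow-empty -m init
git -C "$repo" checkout -q -b branch-2           # old development line
git -C "$repo" branch branch-2.10                # old release branch
git -C "$repo" branch -D branch-2.10             # 1. delete old branch-2.10
git -C "$repo" branch -m branch-2 branch-2.10    # 2. rename branch-2 -> branch-2.10
# 3. on the new branch-2.10 the Maven version would then be set, e.g.
#    mvn versions:set -DnewVersion=2.10.1-SNAPSHOT
git -C "$repo" branch --list
```

After the rename, branch-2 is gone and branch-2.10 points at what branch-2 used to be, which is exactly why commits to branch-2 now fail.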


Jonathan Hung


On Wed, Dec 4, 2019 at 10:55 AM Jonathan Hung  wrote:

> FYI, starting the rename process, beginning with INFRA-19521.
>
> Jonathan Hung
>
>
> On Wed, Nov 27, 2019 at 12:15 PM Konstantin Shvachko 
> wrote:
>
>> Hey guys,
>>
>> I think we diverged a bit from the initial topic of this discussion,
>> which is removing branch-2.10, and changing the version of branch-2 from
>> 2.11.0-SNAPSHOT to 2.10.1-SNAPSHOT.
>> Sounds like the subject line for this thread "Making 2.10 the last minor
>> 2.x release" confused people.
>> It is in fact a wider matter that can be discussed when somebody actually
>> proposes to release 2.11, which I understand nobody does at the moment.
>>
>> So if anybody objects to removing branch-2.10, please make an argument.
>> Otherwise we should go ahead and just do it next week.
>> I see people still struggling to keep branch-2 and branch-2.10 in sync.
>>
>> Thanks,
>> --Konstantin
>>
>> On Thu, Nov 21, 2019 at 3:49 PM Jonathan Hung 
>> wrote:
>>
>>> Thanks for the detailed thoughts, everyone.
>>>
>>> Eric (Badger), my understanding is the same as yours re. minor vs patch
>>> releases. As for putting features into minor/patch releases, if we keep the
>>> convention of putting new features only into minor releases, my assumption
>>> is still that it's unlikely people will want to get them into branch-2
>>> (based on the 2.10.0 release process). For the java 11 issue, we haven't
>>> even really removed support for java 7 in branch-2 (much less java 8), so I
>>> feel moving to java 11 would go along with a move to branch 3. And as you
>>> mentioned, if people really want to use java 11 on branch-2, we can always
>>> revive branch-2. But for now I think the convenience of not needing to port
>>> to both branch-2 and branch-2.10 (and below) outweighs the cost of
>>> potentially needing to revive branch-2.
>>>
>>> Jonathan Hung
>>>
>>>
>>> On Wed, Nov 20, 2019 at 10:50 AM Eric Yang  wrote:
>>>
 +1 for 2.10.x as last release for 2.x version.

 Software becomes more compatible when more companies stress test the same
 software and make improvements in trunk.  Some may be extra cautious about
 moving up the version because of internal obligations to keep things
 running.  Company obligations should not be the driving force for
 maintaining Hadoop branches.  There is no proper collaboration in the
 community when every name-brand company maintains its own Hadoop 2.x
 version.  I think it would be healthier for the community to reduce branch
 forking and spend energy on hardening the software in trunk.  This will
 give more confidence to move up the version than trying to fix n
 permutations of breakage, like the Flash fixing the timeline.

 The Apache license states that there is no warranty of any kind for code
 contributions.  Fewer community release lines should improve software
 quality when eyes are on trunk, and help steer toward the same end goals.

 regards,
 Eric



 On Tue, Nov 19, 2019 at 3:03 PM Eric Badger
  wrote:

> Hello all,
>
> Is it written anywhere what the difference is between a minor release and a
> point/dot/maintenance (I'll use "point" from here on out) release? I have
> looked around and I can't find anything other than some compatibility
> documentation in 2.x that has since been removed in 3.x [1] [2]. I think
> this would help shape my opinion on whether or not to keep branch-2 alive.
> My current understanding is that we can't really break compatibility in
> either a minor or point release. But the only mention of the difference
> between minor and point releases is how to deal with Stable, Evolving, and
> Unstable tags, and how to deal with changing default configuration values.
> So it seems like there really isn't a big official difference between the
> two. In my mind, the functional difference between the two is that the
> minor releases may have added features and rewrites, while the point
> releases only have bug fixes. This might be an incorrect understanding, but
> that's what I have gathered from watching the releases over the last few
> years. Whether or not this is a correct understanding, I think that this
> needs to be documented somewhere, even if it is just a convention.
>
> Given my assumed understanding of minor vs point

Re: [DISCUSS] Ozone 0.4.2 release

2019-12-09 Thread Dinesh Chitlangia
Thank you all for your positive feedback.

We will create a 0.4.2 branch and share more updates as we make progress.

On Sat, Dec 7, 2019 at 7:32 PM Bharat Viswanadham  wrote:

> +1
>
> Thanks,
> Bharat
>
>
> On Sat, Dec 7, 2019 at 1:18 PM Giovanni Matteo Fumarola <
> giovanni.fumar...@gmail.com> wrote:
>
> > +1
> >
> > Thanks for starting this.
> >
> > On Sat, Dec 7, 2019 at 1:13 PM Jitendra Pandey
> >  wrote:
> >
> > > +1
> > >
> > >
> > > > On Dec 7, 2019, at 9:13 AM, Arpit Agarwal
> > 
> > > wrote:
> > > >
> > > > +1
> > > >
> > > >
> > > >
> > > >> On Dec 6, 2019, at 5:25 PM, Dinesh Chitlangia <
> dineshc@gmail.com>
> > > wrote:
> > > >>
> > > >> All,
> > > >> Since the Apache Hadoop Ozone 0.4.1 release, we have had significant
> > > >> bug fixes towards performance & stability.
> > > >>
> > > >> With that in mind, 0.4.2 release would be good to consolidate all
> > those
> > > fixes.
> > > >>
> > > >> Pls share your thoughts.
> > > >>
> > > >>
> > > >> Thanks,
> > > >> Dinesh Chitlangia
> > > >
> > > >
> > >
> > >
> >
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-12-09 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1345/

[Dec 8, 2019 11:52:17 PM] (jhung) YARN-10012. Guaranteed and max capacity queue metrics for custom
[Dec 9, 2019 12:34:46 AM] (jhung) Revert "YARN-10012. Guaranteed and max capacity queue metrics for custom
[Dec 9, 2019 12:35:02 AM] (jhung) YARN-10012. Guaranteed and max capacity queue metrics for custom
[Dec 9, 2019 1:14:44 AM] (aajisaka) Bump nimbus-jose-jwt from 4.41.1 to 7.9 (#1682)
[Dec 9, 2019 1:25:10 AM] (ebadger) Revert "YARN-9561. Add C changes for the new RuncContainerRuntime.
[Dec 9, 2019 1:25:10 AM] (ebadger) YARN-9561. Add C changes for the new RuncContainerRuntime. Contributed




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile() 
calls Thread.sleep() with a lock held At DirectoryScanner.java:lock held At 
DirectoryScanner.java:[line 441] 
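The DirectoryScanner finding above is FindBugs' sleep-with-lock-held pattern: calling Thread.sleep() inside a synchronized region stalls every other thread waiting on that lock for the whole sleep. A minimal, self-contained illustration of the pattern and the usual fix; the class and method names are invented, not the actual DirectoryScanner code.

```java
public class SleepWithLockDemo {
  private final Object lock = new Object();

  // What FindBugs flags: the throttling sleep happens inside the
  // synchronized block, so other threads cannot take the lock meanwhile.
  void throttledScanBad() throws InterruptedException {
    synchronized (lock) {
      // ... work that genuinely needs the lock ...
      Thread.sleep(10); // lock is held while sleeping
    }
  }

  // Usual fix: do the locked work, then throttle outside the critical
  // section so other lock users can proceed during the pause.
  void throttledScanGood() throws InterruptedException {
    synchronized (lock) {
      // ... work that genuinely needs the lock ...
    }
    Thread.sleep(10); // sleep without blocking other threads
  }
}
```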

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 
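The two WorkerId findings above (equals() assuming the argument's type, and equals() not checking for null) are both covered by the standard instanceof guard, since instanceof evaluates to false for null. A hedged sketch of the idiom; the field and class name here are invented, not the real WorkerId layout.

```java
public class WorkerIdSketch {
  private final String hostname;

  public WorkerIdSketch(String hostname) {
    this.hostname = hostname;
  }

  @Override
  public boolean equals(Object obj) {
    if (this == obj) {
      return true;
    }
    // instanceof rejects both null and arguments of the wrong type,
    // addressing both FindBugs complaints with one check.
    if (!(obj instanceof WorkerIdSketch)) {
      return false;
    }
    return hostname.equals(((WorkerIdSketch) obj).hostname);
  }

  @Override
  public int hashCode() {
    return hostname.hashCode(); // keep the hashCode/equals contract
  }
}
```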

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:[line 66] 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
CosNInputStream.java:[line 87] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, 
File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long):in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long): new String(byte[]) At 
CosNativeFileSystemStore.java:[line 178] 
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, 
String, String, int) may fail to clean up java.io.InputStream Obligation to 
clean up resource created at CosNativeFileSystemStore.java:fail to clean up 
java.io.InputStream Obligation to clean up resource created at 
CosNativeFileSystemStore.java:[line 252] is not discharged 
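The "reliance on default encoding" findings in hadoop-cos come from calling new String(byte[]) with no charset, which uses the platform default and so can produce different strings on different JVMs. The usual fix is an explicit StandardCharsets constant. A minimal illustration of the fix only, not the actual CosNativeFileSystemStore code:

```java
import java.nio.charset.StandardCharsets;

public class EncodingSketch {
  // Decodes a byte payload (e.g. a digest or etag) deterministically.
  static String decode(byte[] raw) {
    // new String(raw) would use the platform default charset (the
    // FindBugs default-encoding warning); naming the charset makes the
    // result the same on every JVM.
    return new String(raw, StandardCharsets.UTF_8);
  }

  public static void main(String[] args) {
    byte[] raw = "abc123".getBytes(StandardCharsets.UTF_8);
    System.out.println(decode(raw));
  }
}
```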

Failed junit tests :

   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.TestDecommissionWithStriped 
   hadoop.hdfs.TestFileChecksumCompositeCrc 
   hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes 
   
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption 
   
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorag

[jira] [Created] (HADOOP-16755) Typo in single node cluster setup documentation

2019-12-09 Thread Denes Gerencser (Jira)
Denes Gerencser created HADOOP-16755:


 Summary: Typo in single node cluster setup documentation
 Key: HADOOP-16755
 URL: https://issues.apache.org/jira/browse/HADOOP-16755
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.2.1
Reporter: Denes Gerencser
Assignee: Denes Gerencser


There is a typo in
https://hadoop.apache.org/docs/r3.2.1/hadoop-project-dist/hadoop-common/SingleCluster.html#Execution :
the link in "If you want to execute a job on YARN, see YARN on Single Node."
should point to
"https://hadoop.apache.org/docs/r3.2.1/hadoop-project-dist/hadoop-common/SingleCluster.html#YARN_on_a_Single_Node"
instead of
"https://hadoop.apache.org/docs/r3.2.1/hadoop-project-dist/hadoop-common/SingleCluster.html#YARN_on_Single_Node"
(note the "_a_" difference).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Created] (HADOOP-16754) Fix docker failed to build yetus/hadoop

2019-12-09 Thread kevin su (Jira)
kevin su created HADOOP-16754:
-

 Summary: Fix docker failed to build yetus/hadoop
 Key: HADOOP-16754
 URL: https://issues.apache.org/jira/browse/HADOOP-16754
 Project: Hadoop Common
  Issue Type: Bug
Reporter: kevin su
Assignee: kevin su
 Fix For: 3.3.0


Docker failed to build yetus/hadoop

[https://builds.apache.org/job/hadoop-multibranch/job/PR-1745/1/console]

error message:

07:56:02  Cannot add PPA: 'ppa:~jonathonf/ubuntu/ghc-8.0.2'.
07:56:02  The user named '~jonathonf' has no PPA named 'ubuntu/ghc-8.0.2'
07:56:02  Please choose from the following available PPAs:
07:56:02   * 'ansible':  Ansible
07:56:02   * 'aria2':  aria2
07:56:02   * 'atslang':  ATS2 programming language
07:56:02   * 'backports':  Backport collection

~jonathonf/ubuntu/ghc-8.0.2 was not found in jonathonf's PPA; we need to
switch to another PPA.






[jira] [Created] (HADOOP-16753) Refactor HAAdmin

2019-12-09 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-16753:
--

 Summary: Refactor HAAdmin
 Key: HADOOP-16753
 URL: https://issues.apache.org/jira/browse/HADOOP-16753
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Akira Ajisaka


https://issues.apache.org/jira/browse/YARN-9985?focusedCommentId=16991414&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16991414

We should move the HDFS-specific haadmin options from HAAdmin to DFSHAAdmin to
remove unnecessary if-else statements from the RMAdmin command.


