Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-11-27 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/518/

[Nov 27, 2019 12:46:38 AM] (xkrogen) HDFS-14973. More strictly enforce 
Balancer/Mover/SPS throttling of




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 
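The warning above is FindBugs' "boxed value is unboxed and then immediately reboxed" pattern: a wrapper object is unwrapped to a primitive only to be wrapped again, allocating a fresh object for nothing. A minimal sketch of the anti-pattern and its fix; the method and map below are illustrative only, not the actual ColumnRWHelper code:

```java
import java.util.NavigableMap;
import java.util.TreeMap;

public class ReboxDemo {
    // Anti-pattern FindBugs flags: the Long key is unboxed to long,
    // then Long.valueOf() boxes it right back, allocating a new wrapper.
    static Long timestampFlagged(NavigableMap<Long, byte[]> cell, long bound) {
        return Long.valueOf(cell.floorKey(bound).longValue());
    }

    // Fix: keep the already-boxed value; no unbox/rebox round trip.
    static Long timestampFixed(NavigableMap<Long, byte[]> cell, long bound) {
        return cell.floorKey(bound);
    }

    public static void main(String[] args) {
        NavigableMap<Long, byte[]> cell = new TreeMap<>();
        cell.put(5L, new byte[0]);
        System.out.println(timestampFlagged(cell, 7L)); // 5
        System.out.println(timestampFixed(cell, 7L));   // 5
    }
}
```

Both variants return the same value; the fix simply avoids the pointless allocation the checker is complaining about.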

Failed junit tests :

   hadoop.util.TestReadWriteDiskValidator 
   hadoop.fs.sftp.TestSFTPFileSystem 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.api.impl.TestAMRMClient 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/518/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/518/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/518/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/518/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/518/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/518/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/518/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/518/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/518/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/518/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/518/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/518/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/518/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/518/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/518/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/518/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/518/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [168K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/518/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [324K]
   

[jira] [Created] (HADOOP-16731) Error compiling the hadoop 3.2.1 source code

2019-11-27 Thread zhaobaoquan (Jira)
zhaobaoquan created HADOOP-16731:


 Summary: Error compiling the hadoop 3.2.1 source code
 Key: HADOOP-16731
 URL: https://issues.apache.org/jira/browse/HADOOP-16731
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.2.1
 Environment: ubuntu16.04, 32-bit

hadoop 3.2.1
Reporter: zhaobaoquan
 Fix For: 3.2.1
 Attachments: lll.log

[WARNING] make[1]: *** [CMakeFiles/nativetask_static.dir/all] Error 2
[WARNING] make: *** [all] Error 2

[ERROR] Failed to execute goal 
org.apache.hadoop:hadoop-maven-plugins:3.2.1:cmake-compile (cmake-compile) on 
project hadoop-mapreduce-client-nativetask: make failed with error code 2 -> 
[Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.hadoop:hadoop-maven-plugins:3.2.1:cmake-compile (cmake-compile) on 
project hadoop-mapreduce-client-nativetask: make failed with error code 2
 at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:215)
 at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:156)
 at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:148)
 at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:117)
 at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:81)
 at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build
 (SingleThreadedBuilder.java:56)
 at org.apache.maven.lifecycle.internal.LifecycleStarter.execute 
(LifecycleStarter.java:128)
 at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
 at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
 at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
 at org.apache.maven.cli.MavenCli.execute (MavenCli.java:956)
 at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288)
 at org.apache.maven.cli.MavenCli.main (MavenCli.java:192)
 at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke 
(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke 
(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke (Method.java:498)
 at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced 
(Launcher.java:282)
 at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
 at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode 
(Launcher.java:406)
 at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
Caused by: org.apache.maven.plugin.MojoExecutionException: make failed with 
error code 2
 at org.apache.hadoop.maven.plugin.cmakebuilder.CompileMojo.runMake 
(CompileMojo.java:229)
 at org.apache.hadoop.maven.plugin.cmakebuilder.CompileMojo.execute 
(CompileMojo.java:98)
 at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo 
(DefaultBuildPluginManager.java:137)
 at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:210)
 at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:156)
 at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:148)
 at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:117)
 at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:81)
 at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build
 (SingleThreadedBuilder.java:56)
 at org.apache.maven.lifecycle.internal.LifecycleStarter.execute 
(LifecycleStarter.java:128)
 at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
 at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
 at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
 at org.apache.maven.cli.MavenCli.execute (MavenCli.java:956)
 at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288)
 at org.apache.maven.cli.MavenCli.main (MavenCli.java:192)
 at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke 
(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke 
(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke (Method.java:498)
 at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced 
(Launcher.java:282)
 at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
 at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode 
(Launcher.java:406)
 at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
[ERROR] 
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 

[jira] [Resolved] (HADOOP-16455) ABFS: Implement FileSystem.access() method

2019-11-27 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16455.
-
Fix Version/s: 3.3.0
   Resolution: Fixed

> ABFS: Implement FileSystem.access() method
> --
>
> Key: HADOOP-16455
> URL: https://issues.apache.org/jira/browse/HADOOP-16455
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 3.3.0
>
>
> Implement the access method



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-11-27 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1333/

[Nov 26, 2019 12:41:41 PM] (snemeth) YARN-9937. addendum: Add missing queue 
configs in
[Nov 26, 2019 3:36:19 PM] (github) HADOOP-16709. S3Guard: Make authoritative 
mode exclusive for metadata -
[Nov 26, 2019 3:42:59 PM] (snemeth) YARN-9444. YARN API ResourceUtils's 
getRequestedResourcesFromConfig
[Nov 26, 2019 7:11:26 PM] (weichiu) HADOOP-16685: FileSystem#listStatusIterator 
does not check if given path
[Nov 26, 2019 8:22:35 PM] (snemeth) YARN-9899. Migration tool that help to 
generate CS config based on FS
[Nov 26, 2019 8:29:12 PM] (prabhujoseph) YARN-9991. Fix Application Tag prefix 
to userid. Contributed by Szilard
[Nov 26, 2019 8:45:12 PM] (snemeth) YARN-9362. Code cleanup in 
TestNMLeveldbStateStoreService. Contributed
[Nov 26, 2019 9:04:07 PM] (snemeth) YARN-9290. Invalid SchedulingRequest not 
rejected in Scheduler




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 
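The two WorkerId warnings describe an equals() that blindly casts its argument to WorkerId and never checks for null. A hedged sketch of the standard fix; the single String field here is assumed for illustration and the real mawo class certainly differs:

```java
public class WorkerId {
    private final String id;

    public WorkerId(String id) {
        this.id = id;
    }

    // Flagged pattern (sketch): a blind cast with no null or type check,
    // which throws ClassCastException / NullPointerException on bad input:
    //   public boolean equals(Object o) { return id.equals(((WorkerId) o).id); }

    // Fix: instanceof rejects both null and foreign types before the cast.
    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (!(o instanceof WorkerId)) {
            return false;
        }
        return id.equals(((WorkerId) o).id);
    }

    // equals/hashCode must stay consistent with each other.
    @Override
    public int hashCode() {
        return id.hashCode();
    }

    public static void main(String[] args) {
        WorkerId a = new WorkerId("worker-1");
        System.out.println(a.equals(new WorkerId("worker-1"))); // true
        System.out.println(a.equals(null));                     // false
        System.out.println(a.equals("worker-1"));               // false
    }
}
```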

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:[line 66] 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
CosNInputStream.java:[line 87] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, 
File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long):in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long): new String(byte[]) At 
CosNativeFileSystemStore.java:[line 178] 
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, 
String, String, int) may fail to clean up java.io.InputStream Obligation to 
clean up resource created at CosNativeFileSystemStore.java:fail to clean up 
java.io.InputStream Obligation to clean up resource created at 
CosNativeFileSystemStore.java:[line 252] is not discharged 
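Two of the hadoop-cos warnings are the classic "reliance on default encoding": new String(byte[]) decodes with whatever charset the JVM happens to default to. A minimal sketch of the issue and the usual fix; the method names are illustrative, not the actual CosNativeFileSystemStore code:

```java
import java.nio.charset.StandardCharsets;

public class EncodingDemo {
    // Flagged pattern: new String(byte[]) uses the platform-default
    // charset, so the same bytes can decode differently on differently
    // configured hosts (e.g. a datanode running with a non-UTF-8 locale).
    static String decodeFlagged(byte[] raw) {
        return new String(raw);
    }

    // Fix: name the charset explicitly so decoding is deterministic.
    static String decodeFixed(byte[] raw) {
        return new String(raw, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] raw = "upload-part-1".getBytes(StandardCharsets.UTF_8);
        System.out.println(decodeFixed(raw)); // upload-part-1
    }
}
```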

Failed junit tests :

   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.server.namenode.TestNamenodeCapacityReport 
   hadoop.hdfs.server.namenode.TestRedudantBlocks 
   hadoop.hdfs.tools.TestDFSZKFailoverController 
   hadoop.hdfs.server.federation.router.TestRouterFaultTolerant 
   hadoop.yarn.server.webproxy.amfilter.TestAmFilter 
   

Re: Some updates for Hadoop on ARM and next steps

2019-11-27 Thread Chris Thistlethwaite
If anyone would like to follow along in JIRA, here's the ticket 
https://issues.apache.org/jira/browse/INFRA-19369. I've been updating 
that ticket with any issues. arm-poc has been moved to a node in 
Singapore and will need to be tested again with builds.


I'm going to mention again that someone from Hadoop should be changing 
these builds in order to run against arm-poc. In my reply below, I 
thought that the project knew about the ARM nodes and was involved with 
setting up new builds, which is why I said I'd be willing to make simple 
changes for testing. However I don't want to change things without the 
knowledge of the project. The builds themselves are created by the 
project, not Infra, which means I have no idea which build should run 
against ARM vs any other CPU.


-Chris T.
#asfinfra

On 11/22/19 9:28 AM, Chris Thistlethwaite wrote:
In order to run builds against arm-poc, someone (me included) will 
need to change a build config to only use that label. The node itself 
isn't fully built out like our other ASF nodes, because it's ARM and 
we don't have all the packaged tools built for that architecture, so 
it will likely take some time to fix issues.



-Chris T.
#asfinfra

On 11/22/19 3:46 AM, bo zhaobo wrote:
Thanks. That would be great if a project can use the ARM test worker 
to do the specific testing on ARM.


Also, I think it's better to make @Chris Thistlethwaite aware of 
this email. Could you please give some kind advice? Thank you.


BR

ZhaoBo






Zhenyu Zheng wrote on Friday, Nov 22, 2019 at 4:32 PM:


Hi Hadoop,

First off, I want to thank Wei-Chiu for having me at last
week's Hadoop community sync to introduce our ideas for ARM
support on Hadoop, and also all the attendees for listening
and providing suggestions.

I want to provide an update on the status:

1. Our teammate has successfully donated an ARM machine to the
ApacheInfra team, and it is set up and running:
https://builds.apache.org/computer/arm-poc/. It might be a good
idea to make use of it, for example by running some periodic jobs as
an experiment; it will also benefit us in discussions and when
asking for help on identified problems.

2. I've been testing and debugging sub-project by
sub-project, and here is the current status for YARN:

When running the whole test suite, some of the sub-suites get
skipped under the rule that if a previous test fails, the
dependent suite is skipped. So I manually ran those suites again
to see whether they can pass; the full test result is:

Total: 5688; Failure: 0; Error 15; Skipped 60

Among the 15 errors, 13 of them came from the ``Apache Hadoop
YARN TimelineService HBase tests`` suite. The other 2 came
from the ``Apache Hadoop YARN DistributedShell`` suite.

3. Some workarounds:

1) The only workaround needed to build Hadoop on ARM is to pre-build
grpc-java; my teammates are working with that community to
release a newer version with ARM support:
github.com/grpc/grpc-java/issues/6364


2) For the YARN tests, the TimelineService HBase suite needs either
HBase 1.4.8 or 2.0.2, which can only be built under protobuf
2.5.0 (HBase 1.4.8, HBase 2.0.2 external) and protobuf
3.5.1 (HBase 2.0.2 internal), so we have to pre-build them. The
actual cause of the errors is still under debugging.

3) The rest of the known issues and possible workarounds are
reported to the Hadoop JIRA and are now under Wei-Chiu's umbrella
JIRA: https://issues.apache.org/jira/browse/HADOOP-16723

I have put all the test logs in the attachment, and the error-related
surefire reports on my GitHub:
https://github.com/ZhengZhenyu/HadoopTestLogs/issues/1 (the
attachment size is limited for the mailing list). Please
have a look if you are interested.

So, how should we move forward a little and make use of the
ARM resources in ApacheInfra?

Best Regards,

Zhenyu







Re: [DISCUSS] Making 2.10 the last minor 2.x release

2019-11-27 Thread Konstantin Shvachko
Hey guys,

I think we diverged a bit from the initial topic of this discussion, which
is removing branch-2.10, and changing the version of branch-2 from
2.11.0-SNAPSHOT to 2.10.1-SNAPSHOT.
Sounds like the subject line for this thread "Making 2.10 the last minor
2.x release" confused people.
It is in fact a wider matter that can be discussed when somebody actually
proposes to release 2.11, which I understand nobody does at the moment.

So if anybody objects to removing branch-2.10, please make an argument.
Otherwise we should go ahead and just do it next week.
I see people still struggling to keep branch-2 and branch-2.10 in sync.

Thanks,
--Konstantin

On Thu, Nov 21, 2019 at 3:49 PM Jonathan Hung  wrote:

> Thanks for the detailed thoughts, everyone.
>
> Eric (Badger), my understanding is the same as yours re. minor vs patch
> releases. As for putting features into minor/patch releases, if we keep the
> convention of putting new features only into minor releases, my assumption
> is still that it's unlikely people will want to get them into branch-2
> (based on the 2.10.0 release process). For the java 11 issue, we haven't
> even really removed support for java 7 in branch-2 (much less java 8), so I
> feel moving to java 11 would go along with a move to branch 3. And as you
> mentioned, if people really want to use java 11 on branch-2, we can always
> revive branch-2. But for now I think the convenience of not needing to port
> to both branch-2 and branch-2.10 (and below) outweighs the cost of
> potentially needing to revive branch-2.
>
> Jonathan Hung
>
>
> On Wed, Nov 20, 2019 at 10:50 AM Eric Yang  wrote:
>
>> +1 for 2.10.x as last release for 2.x version.
>>
>> Software would become more compatible when more companies stress test the
>> same software and making improvements in trunk.  Some may be extra caution
>> on moving up the version because obligation internally to keep things
>> running.  Company obligation should not be the driving force to maintain
>> Hadoop branches.  There is no proper collaboration in the community when
>> every name brand company maintains its own Hadoop 2.x version.  I think it
>> would be more healthy for the community to reduce the branch forking and
>> spend energy on trunk to harden the software.  This will give more
>> confidence to move up the version than trying to fix n permutations
>> breakage like Flash fixing the timeline.
>>
>> The Apache license states that there is no warranty of any kind for code
>> contributions.  Having fewer community release processes should improve software
>> quality when eyes are on trunk, and help steer toward the same end goals.
>>
>> regards,
>> Eric
>>
>>
>>
>> On Tue, Nov 19, 2019 at 3:03 PM Eric Badger
>>  wrote:
>>
>>> Hello all,
>>>
>>> Is it written anywhere what the difference is between a minor release
>>> and a
>>> point/dot/maintenance (I'll use "point" from here on out) release? I have
>>> looked around and I can't find anything other than some compatibility
>>> documentation in 2.x that has since been removed in 3.x [1] [2]. I think
>>> this would help shape my opinion on whether or not to keep branch-2
>>> alive.
>>> My current understanding is that we can't really break compatibility in
>>> either a minor or point release. But the only mention of the difference
>>> between minor and point releases is how to deal with Stable, Evolving,
>>> and
>>> Unstable tags, and how to deal with changing default configuration
>>> values.
>>> So it seems like there really isn't a big official difference between the
>>> two. In my mind, the functional difference between the two is that the
>>> minor releases may have added features and rewrites, while the point
>>> releases only have bug fixes. This might be an incorrect understanding,
>>> but
>>> that's what I have gathered from watching the releases over the last few
>>> years. Whether or not this is a correct understanding, I think that this
>>> needs to be documented somewhere, even if it is just a convention.
>>>
>>> Given my assumed understanding of minor vs point releases, here are the
>>> pros/cons that I can think of for having a branch-2. Please add on or
>>> correct me for anything you feel is missing or inadequate.
>>> Pros:
>>> - Features/rewrites/higher-risk patches are less likely to be put into
>>> 2.10.x
>>> - It is less necessary to move to 3.x
>>>
>>> Cons:
>>> - Bug fixes are less likely to be put into 2.10.x
>>> - An extra branch to maintain
>>>   - Committers have an extra branch (5 vs 4 total branches) to commit
>>> patches to if they should go all the way back to 2.10.x
>>> - It is less necessary to move to 3.x
>>>
>>> So on the one hand you get added stability in fewer features being
>>> committed to 2.10.x, but then on the other you get fewer bug fixes being
>>> committed. In a perfect world, we wouldn't have to make this tradeoff.
>>> But
>>> we don't live in a perfect world and committers will make mistakes either
>>> because of lack of knowledge or simply because they 

Re: Some updates for Hadoop on ARM and next steps

2019-11-27 Thread Zhenyu Zheng
Thanks for the reply, Chris, and I really appreciate all the things
you have done to make our node work. I'm sending this mail to announce
that the node is ready, and I hope someone from the Hadoop project could
help us set up some new jobs/builds. I totally understand your role and
opinion; I'm not asking you to add jobs for Hadoop, I'm just trying to
make clear what we are looking for.

As Chris mentioned in previous email exchanges, there are 3 kinds of CI
nodes available in the CI system. The 1st and 2nd types have to use the
current Infra management tools to install the tools and software required
for the system, and those management tools are currently not ready for the
ARM platform. The 3rd kind of CI node is what we have ready now: we
manually install all the required tools and software and maintain them
to match Infra's other nodes. We will also try to make the Infra
management tools usable on ARM so the node can become type 2 or type 1.

As for jobs/builds, a periodic job like
https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-trunk-Commit/ seems
to be the most suitable for what we are looking for at the current step. We
still have some errors and failures (15 errors in Hadoop-YARN, 4
failures and 2 errors in Hadoop-HDFS, 23 failures in Hadoop-MapReduce,
which is a quite small number compared to the total number of tests, and
the failures/errors within each sub-project seem to be caused by the same
problem) that our team will work on. So we want to propose 4 different jobs
similar to the mechanism used in Hadoop-trunk-Commit: an SCM-triggered
periodic job that tests building and unit tests for each sub-project,
namely Hadoop-YARN-trunk-Commit-Aarch64, Hadoop-HDFS-trunk-Commit-Aarch64,
Hadoop-MapReduce-trunk-Commit-Aarch64 and Hadoop-Common-trunk-Commit-Aarch64,
so each project can be tracked more closely. We can also start one by one,
of course.

Hope this clears up the misunderstanding.

BR,
