I think I understand a bit better, though now I ask how this date is
different from the release date. Based on the HowToRelease instructions, we
set the release date to when the release vote passes. So, start of release
vote vs. end of release vote doesn't seem that different, and these dates
are s
> Andrew: I bet many would assume it's the release date, like how Ubuntu
releases are numbered.
Good point. Maybe I confused you because of a lack of explanation.
I assume that "branch cut-off timing" means the timing of freezing the branch,
like when the release vote starts. It's because the relea
Thanks for the replies Akira and Tsuyoshi; inline:
Akira:
> Assuming 3.0.0-alpha1 will be released between 2.7.0 and 2.8.0, we
> need to add 3.0.0-alphaX if 2.8.0 is in the fix versions of a jira and we
> don't need to add 3.0.0-alphaX if 2.7.0 is in the fix versions of a jira.
> Is it right?
Yes, cor
Hi Vinod,
Thanks all for starting the discussion!
My suggestion is to add the date when the branch cut is done, like
3.0.0-alpha1-20160724, 2.8.0-20160730, or something similar.
Pros:
- It's totally ordered. If we have a policy such as backporting
to maintenance branches after the date, users can find that
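A date-suffixed scheme like the one suggested above is totally ordered by cut
date. A minimal sketch (the version strings are the examples from this thread;
the trailing field is assumed to be YYYYMMDD):

```shell
# Order date-suffixed version strings by their trailing YYYYMMDD cut date.
# awk pulls the last '-'-separated field out as a numeric sort key.
printf '%s\n' '2.8.0-20160730' '3.0.0-alpha1-20160724' |
  awk -F- '{ print $NF, $0 }' | sort -n | cut -d' ' -f2-
# prints 3.0.0-alpha1-20160724 first: it was cut earlier
```

Note that a plain lexical sort would not work across release lines, since the
version names have different numbers of '-'-separated fields.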
Thanks Vinod and Andrew for the summary.
> Here's an attempt at encoding this policy as a set of rules for setting fix
> versions (credit to Allen):
>
>
> 1) Set the fix version for all a.b.c versions, where c > 0.
> 2) For each major release line, set the lowest a.b.0 version.
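The two rules above can be sketched as a filter over a hypothetical list of
candidate fix versions (the version numbers below are illustrative only):

```shell
# Sketch of the two fix-version rules, over a hypothetical version list:
#   rule 1: every a.b.c with c > 0 is always kept;
#   rule 2: per major release line, only the lowest a.b.0 is kept.
versions='2.6.4 2.7.3 2.8.0 2.9.0 3.0.0'
for v in $versions; do
  case $v in *.0) ;; *) echo "$v" ;; esac            # rule 1
done
printf '%s\n' $versions | grep '\.0$' |
  sort -t. -k1,1n -k2,2n | awk -F. '!seen[$1]++'     # rule 2
# prints 2.6.4, 2.7.3, 2.8.0, 3.0.0 (2.9.0 is dropped by rule 2)
```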
Assuming 3.0.0-al
+1 for the source tarball.
- Downloaded source tarball and binary tarball
- Verified signatures and checksums
- Compiled and built a single node cluster
- Compiled Hive 2.1.0/1.2.1 and Tez 0.8.4/0.7.1 using Hadoop 2.7.3 pom
successfully
- Ran some Hive on Tez queries successfully
Thanks,
Akira
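The signature and checksum checks reported in these votes follow a standard
flow; a minimal, runnable sketch of the checksum half (a stand-in file replaces
the real tarball here, and the gpg step needs the release manager's public key
imported first):

```shell
# Stand-in file so the sketch runs anywhere; real use targets the RC tarballs.
echo 'release-bits' > hadoop-2.7.3-RC0-src.tar.gz
# Record and then re-verify the MD5 checksum, as voters do against the
# published .mds file.
md5sum hadoop-2.7.3-RC0-src.tar.gz > hadoop-2.7.3-RC0-src.tar.gz.md5
md5sum -c hadoop-2.7.3-RC0-src.tar.gz.md5   # prints "...: OK" on success
# Signature check (requires the signer's public key):
#   gpg --verify hadoop-2.7.3-RC0-src.tar.gz.asc hadoop-2.7.3-RC0-src.tar.gz
```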
+1 (non-binding)
Thanks, Vinod, for all of your hard work, and congratulations on completing this
release.
After downloading and building the source, I installed Hadoop 2.7.3 RC0 on a
3-node, multi-tenant, insecure cluster. I ran manual tests to ensure the
following:
- Ensure that user limit perc
+1 (non-binding).
- Built tarball from source.
- Deployed pseudo-distributed cluster.
- Ran sleep job.
Thank you Vinod!
Regards,
Kuhu Shukla
On Tuesday, July 26, 2016 4:17 PM, Andrew Wang
wrote:
On Tue, Jul 26, 2016 at 1:23 PM, Karthik Kambatla
wrote:
> IIRR, the vote is o
On Tue, Jul 26, 2016 at 1:23 PM, Karthik Kambatla
wrote:
> IIRR, the vote is on source artifacts and binaries are for convenience.
>
Let me refine this statement a bit. Both the binary tarball and the JARs
we publish to Maven are still official release artifacts. This is why we
need L&Ns for th
Thanks Vinod for forking the thread. Let me try and summarize what Allen
and I talked about in the previous thread.
Currently, we've been marking JIRAs with fix versions of both 2.6.x and
2.7.x. IIUC, the chronological ordering between these two lines is actually
not important. If you're on 2.6.1,
* Verified mds and pgp signatures of both source and binary
* Built tarball from source on OS X 10.11.6 (El Capitan)
* Deployed in pseudo-distributed mode
* Ran sleep jobs and other randomly selected tests on both MapReduce and Tez
* Visually verified the RM and history server UIs
Thanks,
Eric
IIRR, the vote is on source artifacts and binaries are for convenience.
If that is right, I am open to either option: do another RC, or continue
this vote and fix the binary artifacts.
On Tue, Jul 26, 2016 at 12:11 PM, Vinod Kumar Vavilapalli <
vino...@apache.org> wrote:
> Thanks Daniel and Wei
Tested as follows:
- Deployed a pseudo cluster from the RC0 tar
- Verified signature and checksum
- Ran MR sleep on YARN
- With/without node labels enabled
- Verified ATS V1 (one small issue, though not a regression: the tracking URL
for a running app is shown as Unassigned; will check and raise if required)
But everyone, please do continue your sanity checking on RC0 in case there are
more issues to be fixed.
Thanks
+Vinod
> On Jul 26, 2016, at 12:11 PM, Vinod Kumar Vavilapalli
> wrote:
>
> Thanks Daniel and Wei.
>
> I think these are worth fixing, I’m withdrawing this RC. Will look at fixing
Thanks Daniel and Wei.
I think these are worth fixing, I’m withdrawing this RC. Will look at fixing
these issues and roll a new candidate with the fixes as soon as possible.
Thanks
+Vinod
> On Jul 26, 2016, at 11:05 AM, Wei-Chiu Chuang wrote:
>
> I noticed two issues:
>
> (1) I ran hadoop ch
Forking the thread to make sure it attracts enough eyeballs. The earlier one
was about 3.0.0 specifically, and I don't think enough people were watching that.
I’ll try to summarize a bit.
# Today’s state of release numbering and ordering:
So far, all the releases we have done, we have follow
Hi all,
I'm trying to generate JDiff for the sub-projects of Hadoop; some updates:
- *Common*: JDiff cannot be generated; filed
https://issues.apache.org/jira/browse/HADOOP-13428 and debugging that.
- *HDFS*: It pointed to an older version (2.6.0); we need to upgrade it to
the latest stable release (p
+1
Thanks
+Vinod
> On Jul 26, 2016, at 7:39 AM, Wangda Tan wrote:
>
> let's try to use both JDiff and the new tool and compare results, because this
> is the first time with the new tool.
>
> Appreciate your time helping us with this effort!
I noticed two issues:
(1) I ran hadoop checknative, but it seems the binary tarball was not
compiled with the native library for Linux. In contrast, the Hadoop built
from the source tarball with maven -Pnative can find the native libraries on
the same host.
(2) I noticed that the release dates in CHA
I just downloaded the build tarball and deployed it on a 2-node
cluster. It looks to me like it's compiled for the wrong platform:
# file /usr/lib/hadoop/bin/container-executor
/usr/lib/hadoop/bin/container-executor: setuid setgid Mach-O 64-bit
executable
I'm also seeing the no-native-librar
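A platform mismatch like the one above is the kind of thing file(1) catches
quickly. A sketch, with the system's ls binary standing in for
bin/container-executor so it runs anywhere:

```shell
# Check which platform a native binary was built for. A Linux node should
# report an ELF executable; "Mach-O" means it was built on macOS and will
# not run on the cluster nodes.
file "$(command -v ls)"        # stand-in for bin/container-executor
# For the native-library half of the report, Hadoop itself offers:
#   hadoop checknative -a
```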
Thanks Vinod for all the release work!
+1 (non-binding).
* Downloaded from source and built it.
* Deployed a pseudo-distributed cluster.
* Ran some sample jobs: sleep, pi
* Ran some dfs commands.
* Everything works fine.
On Friday, July 22, 2016 9:16 PM, Vinod Kumar Vavilapalli
wrote:
H
Thanks for putting this up, Vinod.
+1 (non-binding)
* verified signature and mds of source and binary tarball
* built from source tarball on CentOS 6
* built site documentation
* deployed 3-node cluster with NN-HA and RM-HA, ran example jobs
* built rpms by using Bigtop, deployed 3-node cluster
For more details, see
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/114/
[Jul 25, 2016 1:45:03 PM] (stevel) HADOOP-13406 S3AFileSystem: Consider reusing
filestatus in delete() and
[Jul 25, 2016 2:50:23 PM] (stevel) HADOOP-13188 S3A file-create should throw
error rather than ove
Hi Sean,
Sorry I didn't make it clear; let's try to use both JDiff and the new tool and
compare results, because this is the first time with the new tool.
Appreciate your time helping us with this effort!
Thanks,
Wangda
Sent from my iPhone
> On Jul 26, 2016, at 6:01 AM, Sean Busbey wrote:
>
>
+1(non-binding)
Downloaded and built from source
Cluster installed in 3 nodes and verified running simple MR jobs.
Verified RM HA and RM work-preserving restart with CapacityScheduler
Thanks & Regards
Rohith Sharma K S
> On Jul 26, 2016, at 6:50 PM, Vinayakumar B wrote:
>
> +1 (binding)
>
+1 (binding)
1. Downloaded and built from branch-2.7.3
2. Started up HDFS and YARN in a single-node cluster.
3. Ran the WordCount job multiple times, successfully.
4. Verified the "Release Notes" available at the URL mentioned by Vinod.
Apart from that,
Faced the same issues as Andrew Wang while running t
Just so I don't waste time chasing my tail, should I interpret this
email and the associated JIRA as the PMC preferring I not spend
volunteer time providing a compatibility breakdown as previously
discussed?
On Mon, Jul 25, 2016 at 7:54 PM, Wangda Tan wrote:
> I just filed ticket https://issues.a
Yes, the Java API Compliance Checker allows specifying Annotations to
pare down where incompatible changes happen. It was added some time
ago based on feedback from the Apache HBase project.
The limitations I've found are: 1) at least earlier versions only
supported annotations at the class level
Thank you Vinod.
+1 (non-binding)
- downloaded and built from source
- deployed an HDFS-HA cluster and tested a few switching behaviors
- executed a few hdfs commands from the command line
- viewed basic UI
- ran HDFS/Common unit tests
- checked LICENSE and NOTICE files
Regards,
Rakesh
Intel
On Tue, Jul 2