Thanks Vinod and Andrew for the summary.
> Here's an attempt at encoding this policy as a set of rules for
> setting fix versions (credit to Allen):
>
> 1) Set the fix version for all a.b.c versions, where c > 0.
> 2) For each major release line, set the lowest a.b.0 version. Assuming
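As a toy illustration (not actual project tooling), the two quoted rules can be sketched in Python:

```python
def fix_versions(versions):
    """Apply the two quoted rules to the release versions (strings like
    "2.7.3") that contain a given change:
      1) keep every a.b.c with c > 0;
      2) per major line a, keep only the lowest a.b.0.
    A toy sketch of the policy, not project tooling."""
    parsed = sorted(tuple(int(p) for p in v.split(".")) for v in versions)
    keep = [v for v in parsed if v[2] > 0]  # rule 1
    seen_major = set()
    for v in parsed:  # rule 2: sorted order makes the first a.b.0 the lowest
        if v[2] == 0 and v[0] not in seen_major:
            keep.append(v)
            seen_major.add(v[0])
    return sorted(".".join(map(str, v)) for v in keep)
```

For example, a change landing in 2.7.3, 2.8.0, 2.9.0, and 3.0.0 would get fix versions 2.7.3, 2.8.0, and 3.0.0.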
+1 for the source tarball.
- Downloaded source tarball and binary tarball
- Verified signatures and checksums
- Compiled and built a single node cluster
- Compiled Hive 2.1.0/1.2.1 and Tez 0.8.4/0.7.1 using Hadoop 2.7.3 pom
successfully
- Ran some Hive on Tez queries successfully
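The checksum step in checklists like the one above amounts to recomputing the digest of the downloaded artifact and comparing it with the published value; a minimal sketch (the function name is illustrative):

```python
import hashlib

def verify_checksum(path, expected_hex, algo="sha256"):
    """Recompute the digest of a downloaded artifact in chunks and compare
    it with the value published alongside the release."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex
```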
Thanks,
+1 (non-binding)
Thanks, Vinod, for all of your hard work, and congratulations on completing this
release.
After downloading and building the source, I installed Hadoop 2.7.3 RC0 on a
3-node, multi-tenant, insecure cluster. I ran manual tests to ensure the
following:
- Ensure that user limit
On Tue, Jul 26, 2016 at 1:23 PM, Karthik Kambatla
wrote:
> IIRR, the vote is on source artifacts and binaries are for convenience.
>
> Let me refine this statement a bit. Both the binary tarball and the JARs
we publish to Maven are still official release artifacts. This is
* Verified mds and pgp signatures of both source and binary
* Built tarball from source on OS X 10.11.6 (El Capitan)
* Deployed in pseudo-distributed mode
* Ran sleep jobs and other randomly selected tests on both MapReduce and Tez
* Visually verified the RM and history server UIs
Thanks,
Eric
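Per the Hadoop single-node setup documentation, pseudo-distributed deployments like the one above need only a minimal configuration, roughly:

```xml
<!-- core-site.xml -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
<!-- hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```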
IIRR, the vote is on source artifacts and binaries are for convenience.
If that is right, I am open to either option - do another RC or continue
this vote and fix the binary artifacts.
On Tue, Jul 26, 2016 at 12:11 PM, Vinod Kumar Vavilapalli <
vino...@apache.org> wrote:
> Thanks Daniel and
Tested as follows:
- Deployed a pseudo cluster from the RC0 tar
- Verified signature and checksum
- Ran MR sleep on YARN
- With/without node labels enabled
- Verified ATS V1 (one small issue, though not a regression: the tracking URL
for a running app is shown as "Unassigned"; will check and raise if required)
But everyone, please do continue your sanity checking on RC0 in case there are
more issues to be fixed.
Thanks
+Vinod
> On Jul 26, 2016, at 12:11 PM, Vinod Kumar Vavilapalli
> wrote:
>
> Thanks Daniel and Wei.
>
> I think these are worth fixing, I’m withdrawing this RC.
Thanks Daniel and Wei.
I think these are worth fixing, I’m withdrawing this RC. Will look at fixing
these issues and roll a new candidate with the fixes as soon as possible.
Thanks
+Vinod
> On Jul 26, 2016, at 11:05 AM, Wei-Chiu Chuang wrote:
>
> I noticed two issues:
>
Forking the thread to make sure it attracts enough eyeballs. The earlier one
was about 3.0.0 specifically, and I don’t think enough people were watching that.
I’ll try to summarize a bit.
# Today’s state of release numbering and ordering:
So far, for all the releases we have done, we have
Hi all,
I'm trying to generate JDiff for the subprojects of Hadoop; some updates:
- *Common*: JDiff cannot be generated; filed
https://issues.apache.org/jira/browse/HADOOP-13428 and I'm debugging that.
- *HDFS*: It pointed to an older version (2.6.0); we need to upgrade it to
the latest stable release.
+1
Thanks
+Vinod
> On Jul 26, 2016, at 7:39 AM, Wangda Tan wrote:
>
> Let's try to use both JDiff and the new tool and compare results, because
> this is the first time with the new tool.
>
> Appreciate your time helping us with this effort!
I noticed two issues:
(1) I ran hadoop checknative, but it seems the binary tarball was not
compiled with the native library for Linux. In contrast, Hadoop built
from the source tarball with maven -Pnative can find the native libraries on
the same host.
(2) I noticed that the release dates in
I just downloaded the build tarball and deployed it on a 2-node
cluster. It looks to me like it's compiled for the wrong platform:
# file /usr/lib/hadoop/bin/container-executor
/usr/lib/hadoop/bin/container-executor: setuid setgid Mach-O 64-bit
executable
I'm also seeing the
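The wrong-platform symptom above (a Mach-O binary in a Linux tarball) can also be confirmed without `file` by reading the binary's magic bytes; a minimal sketch (the function name is illustrative):

```python
# Distinguish a Linux ELF binary from a macOS Mach-O one by its leading
# magic bytes, mirroring what `file` reported above.
MAGICS = {
    b"\x7fELF": "ELF (Linux)",
    b"\xcf\xfa\xed\xfe": "Mach-O 64-bit (macOS, little-endian)",
    b"\xfe\xed\xfa\xcf": "Mach-O 64-bit (macOS, big-endian)",
}

def binary_format(path):
    """Return a rough description of the executable format at `path`."""
    with open(path, "rb") as f:
        head = f.read(4)
    for magic, name in MAGICS.items():
        if head.startswith(magic):
            return name
    return "unknown"
```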
Thanks Vinod for all the release work!
+1 (non-binding).
* Downloaded from source and built it.
* Deployed a pseudo-distributed cluster.
* Ran some sample jobs: sleep, pi.
* Ran some dfs commands.
* Everything works fine.
On Friday, July 22, 2016 9:16 PM, Vinod Kumar Vavilapalli
Thanks for putting this up, Vinod.
+1 (non-binding)
* verified signature and mds of source and binary tarball
* built from source tarball on CentOS 6
* built site documentation
* deployed 3-node cluster with NN-HA and RM-HA, ran example jobs
* built rpms by using Bigtop, deployed 3-node cluster
For more details, see
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/114/
[Jul 25, 2016 1:45:03 PM] (stevel) HADOOP-13406 S3AFileSystem: Consider reusing
filestatus in delete() and
[Jul 25, 2016 2:50:23 PM] (stevel) HADOOP-13188 S3A file-create should throw
error rather than
+1(non-binding)
Downloaded and built from source.
Installed a 3-node cluster and verified running simple MR jobs.
Verified RM HA and RM work-preserving restart with CapacityScheduler.
Thanks & Regards
Rohith Sharma K S
> On Jul 26, 2016, at 6:50 PM, Vinayakumar B
+1 (binding)
1. Downloaded and built from branch-2.7.3
2. Started up HDFS and YARN in a single-node cluster.
3. Ran the WordCount job multiple times successfully.
4. Verified the "Release Notes" available at the URL mentioned by Vinod.
Apart from that, I faced the same issues as Andrew Wang while running
Just so I don't waste time chasing my tail, should I interpret this
email and the associated JIRA as the PMC preferring I not spend
volunteer time providing a compatibility breakdown as previously
discussed?
On Mon, Jul 25, 2016 at 7:54 PM, Wangda Tan wrote:
> I just filed
Yes, the Java API Compliance Checker allows specifying annotations to
pare down where incompatible changes are flagged. It was added some time
ago based on feedback from the Apache HBase project.
The limitations I've found are: 1) at least earlier versions only
supported annotations at the class level
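As a toy illustration of what annotation-based scoping does (the names below are hypothetical, not the tool's API): only findings in classes carrying a public-API annotation are kept, which is why class-level-only support is a limitation.

```python
# Toy sketch of annotation-scoped compatibility checking. Only classes
# annotated as public API are compared; annotation names are hypothetical.
PUBLIC = "InterfaceAudience.Public"

def filter_incompatibilities(changes, annotations):
    """changes: list of (class_name, description) findings;
    annotations: mapping of class_name -> set of annotation names."""
    return [(cls, desc) for cls, desc in changes
            if PUBLIC in annotations.get(cls, set())]
```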
Thank you Vinod.
+1 (non-binding)
- downloaded and built from source
- deployed an HDFS-HA cluster and tested a few switching behaviors
- executed a few hdfs commands from the command line
- viewed basic UI
- ran HDFS/Common unit tests
- checked LICENSE and NOTICE files
Regards,
Rakesh
Intel
On Tue, Jul
Thanks Vinod.
+1 (non-binding)
* Downloaded and built from source
* Checked LICENSE and NOTICE
* Deployed a pseudo cluster
* Ran through MR and HDFS tests
* Verified basic HDFS operations and the Pi job.
Zhihai
On Fri, Jul 22, 2016 at 7:15 PM, Vinod Kumar Vavilapalli wrote: