+1 (binding) on the source tarball; it would be nice to redo the binary
tarball.

- Stood up a pseudo-dist cluster and ran some HDFS and MR jobs.
- The binary is about 40 MB larger than the previous release; the docs
appear to be copied twice - under share/doc/hadoop and share/hadoop. The
binary tarball from the release build should have everything in the right
place; I can help with this if needed.
- index.html (under share/doc/hadoop) is stale and should be updated. If we
ship the bits as-is, we should at least list the new features on the
website, and highlight the alpha features (like YARN reservations) as
alpha.
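
The doubled doc trees Karthik describes can be confirmed by hashing files under share/ and looking for content that appears more than once. This is a sketch: the mkdir/echo lines fabricate a tiny mock of the suspected layout so the snippet is self-contained; point TARBALL_DIR at the real unpacked hadoop-2.6.0 directory instead when checking an actual RC.

```shell
# Sketch: flag files under share/ whose content appears more than once.
# The mock files below stand in for the real tarball contents.
TARBALL_DIR=$(mktemp -d)
mkdir -p "$TARBALL_DIR/share/doc/hadoop" "$TARBALL_DIR/share/hadoop/common"
echo "same doc page" > "$TARBALL_DIR/share/doc/hadoop/index.html"
echo "same doc page" > "$TARBALL_DIR/share/hadoop/common/index.html"

# Hash every file and keep only checksums that occur more than once.
dups=$(find "$TARBALL_DIR/share" -type f -exec md5sum {} + \
         | awk '{print $1}' | sort | uniq -d)
echo "duplicated checksums: $dups"
rm -rf "$TARBALL_DIR"
```

A non-empty result means identical files exist in more than one place, which would account for the extra ~40 MB if the doc tree is shipped twice.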

Thanks for putting this together, Arun.

Karthik

On Tue, Nov 18, 2014 at 11:07 AM, Jian He <j...@hortonworks.com> wrote:

> +1,
>
> Built from source.
> Deployed a single node cluster.
> Ran sample MapReduce jobs while restarting RM successfully.
>
> Jian
>
> On Tue, Nov 18, 2014 at 8:47 AM, Eric Payne
> <erichadoo...@yahoo.com.invalid> wrote:
>
> > +1. Thanks, Arun, for producing this release. I downloaded and built
> > the source, started a local cluster, and ran wordcount, sleep, and
> > streaming jobs.
> >
> > - I ran a distributed shell job which tested preserving containers
> > across AM restart by setting the
> > -keep_containers_across_application_attempts flag and killing the first
> > AM once the containers started. I checked the results by looking in the
> > timeline server and comparing the start times of the non-AM containers
> > against the start times of the later AM container.
> > - I enabled the preemption feature and verified containers were
> > preempted and queues were adjusted to guaranteed levels.
> > - I ran unit tests for hadoop-yarn-server-resourcemanager. All passed
> > with the exception of TestContainerResourceUsage.
> > - I ran unit tests for hadoop-hdfs. All passed with the exception of
> > TestBPOfferService#testBasicFunctionality (HDFS-3930).
> >
> > Thank you,
> > -Eric Payne
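
The preemption feature Eric exercised is switched on through the scheduler monitor. The following yarn-site.xml fragment is a minimal sketch based on the standard CapacityScheduler preemption settings in the 2.x line (all other preemption tunables left at their defaults); it is not taken from Eric's actual test configuration.

```xml
<!-- yarn-site.xml: enable the scheduler monitor so the CapacityScheduler
     can preempt containers back toward queue capacity guarantees. -->
<property>
  <name>yarn.resourcemanager.scheduler.monitor.enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.monitor.policies</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy</value>
</property>
```

With this in place, over-capacity queues give containers back to under-served queues, which is the "queues were adjusted to guaranteed levels" behavior described above.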
> >
> > From: Arun C Murthy <a...@hortonworks.com>
> > To: common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org;
> > yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
> > Sent: Thursday, November 13, 2014 5:08 PM
> > Subject: [VOTE] Release Apache Hadoop 2.6.0
> >
> > Folks,
> >
> > I've created another release candidate (rc1) for hadoop-2.6.0 based on
> > the feedback.
> >
> > The RC is available at:
> > http://people.apache.org/~acmurthy/hadoop-2.6.0-rc1
> > The RC tag in git is: release-2.6.0-rc1
> >
> > The maven artifacts are available via repository.apache.org at
> > https://repository.apache.org/content/repositories/orgapachehadoop-1013.
> >
> > Please try the release and vote; the vote will run for the usual 5 days.
> >
> > thanks,
> > Arun
> >
> >
> > --
> > CONFIDENTIALITY NOTICE
> > NOTICE: This message is intended for the use of the individual or entity
> > to which it is addressed and may contain information that is
> > confidential, privileged and exempt from disclosure under applicable
> > law. If the reader of this message is not the intended recipient, you
> > are hereby notified that any printing, copying, dissemination,
> > distribution, disclosure or forwarding of this communication is strictly
> > prohibited. If you have received this communication in error, please
> > contact the sender immediately and delete it from your system. Thank You.
> >
> >
> >
> >
>
>
