Thanks @Konstantin for reporting this issue, I will post comments on the
JIRA (HADOOP-15205)
- Wangda
On Tue, Apr 10, 2018 at 12:08 PM, Konstantin Shvachko wrote:
A note to release managers. As discussed in
https://issues.apache.org/jira/browse/HADOOP-15205
We are producing release artifacts without sources jars. See e.g.
https://repository.apache.org/content/repositories/releases/org/apache/hadoop/hadoop-common/3.1.0/
I believe this has something to do
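A minimal sketch of the kind of check being described: a complete Maven release directory carries a `-sources.jar` alongside the main jar, and the staged 3.1.0 artifacts lack it. The directory layout and file names below are local stand-ins fabricated for illustration, not the actual repository.

```shell
# Fabricated local stand-in for a Maven repo directory: the main jar is
# staged but the sources jar is not, reproducing the reported gap.
mkdir -p repo/org/apache/hadoop/hadoop-common/3.1.0
touch repo/org/apache/hadoop/hadoop-common/3.1.0/hadoop-common-3.1.0.jar

# A complete release would also contain hadoop-common-3.1.0-sources.jar.
if ls repo/org/apache/hadoop/hadoop-common/3.1.0/*-sources.jar >/dev/null 2>&1; then
  echo "sources jar present"
else
  echo "sources jar missing"
fi
```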
Thanks guys for the additional votes! I just sent out announcement email.
Best,
Wangda
On Fri, Apr 6, 2018 at 2:32 AM, 俊平堵 wrote:
Thanks Wangda for the great work! Sorry for my late coming +1 (binding),
based on:
- Verified signatures
- Verified checksums for source and binary artifacts
- Built from source
- Deployed a single node cluster
- Verified web UIs, including NameNode, RM, etc.
* Tried shell commands of HDFS and
-
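The signature and checksum steps in votes like the one above can be sketched as follows. The file here is a locally created stand-in, not a real RC artifact; for an actual RC you would download the tarball plus its `.asc` and checksum files from the staging area.

```shell
# Create a stand-in "artifact" locally so the checksum round-trip runs.
printf 'release bits' > hadoop-3.1.0-src.tar.gz

# The release manager publishes a checksum file next to each artifact...
sha512sum hadoop-3.1.0-src.tar.gz > hadoop-3.1.0-src.tar.gz.sha512

# ...and each verifier recomputes and compares it before voting.
sha512sum -c hadoop-3.1.0-src.tar.gz.sha512

# The signature check (requires the release manager's public key):
#   gpg --verify hadoop-3.1.0-src.tar.gz.asc hadoop-3.1.0-src.tar.gz
```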
> From: Wangda Tan <wheele...@gmail.com>
> Date: Monday, April 2, 2018 at 9:25 PM
> To: Arpit Agarwal <aagar...@hortonworks.com>
> Cc: Gera Shegalov <ger...@gmail.com>, Sunil G <sun...@apache.org>,
> "yarn-...@hadoop.apache.org" <yarn-...@hadoop.apache.org>, Hdfs-dev
> <hdfs-dev@hadoop.apache.org>, Hadoop Common <common-...@hadoop.apache.org>,
> "mapreduce-...@hadoop.apache.org" <mapreduce-...@hadoop.apache.org>,
> Vinod Kumar Vavilapalli <vino...@apache.org>
> Subject: Re: [VOTE] Release Apache Hadoop 3.1.0 (RC1)
> * Built from source
> * Deployed to 3 node secure cluster with NameNode HA
> * Verified HDFS web UIs
> * Tried out HDFS shell commands
> * Ran sample MapReduce jobs
>
> Thanks!
+1 (binding)
Thanks Wangda for doing the work to produce this release.
I did the following to test the release:
- Built from source
- Installed on 6-node pseudo cluster
- Interacted with RM CLI and GUI
- Tested streaming jobs
- Tested yarn distributed shell jobs
- Tested Max AM Resource Percent
jobs
+1 (binding)
Thanks for putting this together, Wangda.
I deployed a 7-node hadoop cluster and:
- ran various MR jobs
- ran the same jobs with a mix of opportunistic containers via centralized
scheduling
- ran the jobs with opportunistic containers and distributed scheduling
- ran some jobs with
As pointed by Arpit, the previously deployed shared jars are incorrect.
Just redeployed jars and staged. @Arpit, could you please check the updated
Maven repo?
https://repository.apache.org/content/repositories/orgapachehadoop-1092
Since the jars inside binary tarballs are correct (
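One way to spot-check a redeployed Maven artifact is to recompute its checksum and compare it against the checksum file Maven publishes next to each jar. Everything below is a local stand-in with illustrative names, not the actual staging repository.

```shell
# Stand-in jar plus the .sha1 file Maven would publish beside it.
printf 'jar bytes' > hadoop-common-3.1.0.jar
sha1sum hadoop-common-3.1.0.jar | awk '{print $1}' > hadoop-common-3.1.0.jar.sha1

# Recompute and compare, as a verifier of the staged repo would.
expected=$(cat hadoop-common-3.1.0.jar.sha1)
actual=$(sha1sum hadoop-common-3.1.0.jar | awk '{print $1}')
[ "$actual" = "$expected" ] && echo "sha1 matches"
```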
Hi Arpit,
Thanks for pointing this out.
I just removed all .md5 files from the artifacts. MD5 checksums still exist
in the .mds files; I didn't remove them there because the .mds files are
generated by the create-release script and the Apache guidance is "should
not" rather than "must not". Please let me
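The cleanup described above can be sketched like this; the directory layout and file names are illustrative stand-ins, not the actual artifact tree.

```shell
# Stand-in artifact directory containing both .md5 and .sha512 files.
mkdir -p artifact
touch artifact/hadoop-3.1.0.tar.gz \
      artifact/hadoop-3.1.0.tar.gz.md5 \
      artifact/hadoop-3.1.0.tar.gz.sha512

# Drop only the .md5 files, keeping the stronger checksums in place.
find artifact -name '*.md5' -delete
ls artifact
```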
Thanks for putting together this RC, Wangda.
The guidance from Apache is to omit MD5s, specifically:
> SHOULD NOT supply a MD5 checksum file (because MD5 is too broken).
https://www.apache.org/dev/release-distribution#sigs-and-sums
On Apr 2, 2018, at 7:03 AM, Wangda Tan
Hi Gera,
My bad: I thought only the src/bin tarballs were enough.
I just uploaded all other things under artifact/ to
http://people.apache.org/~wangda/hadoop-3.1.0-RC1/
Please let me know if you have any other comments.
Thanks,
Wangda
On Mon, Apr 2, 2018 at 12:50 AM, Gera Shegalov
+1 (binding)
On Mon, Apr 2, 2018, 12:24 Sunil G wrote:
Thanks, Wangda!
There are many more artifacts in previous votes, e.g., see
http://home.apache.org/~junping_du/hadoop-2.8.3-RC0/ . Among others the
site tarball is missing.
On Sun, Apr 1, 2018 at 11:54 PM Sunil G wrote:
Thanks Wangda for initiating the release.
I tested this RC, built from source.
- Tested MR apps (sleep, wc) and verified both new YARN UI and old RM UI.
- Sanity checks of the features below are done:
- Application priority
- Application timeout
- Intra Queue preemption with priority
Hi Wangda
+1 (non-binding)
- Smoke tests with teragen/terasort/DS jobs
- Validated various metrics and UI displays; ran compatibility tests
- Tested GPU resource-type
- Verified RM fail-over, app-recovery
- Verified async-scheduling with 2 threads
- Enabled placement constraints, tested
Wangda thanks for driving this.
+1(binding)
--Built from source
--Installed HA cluster
--Verified Basic Shell commands
--Ran Sample Jobs
--Browsed the UIs.
On Fri, Mar 30, 2018 at 9:45 AM, Wangda Tan wrote:
> Hi folks,
>
> Thanks to the many who helped with this release
+1 (binding)
* Downloaded source, built from source with -Dhbase.profile=2.0.
* Installed an RM HA cluster integrated with ATSv2, with HBase-2.0-beta1 as
the ATSv2 backend. The scheduler is configured with a 2-level queue hierarchy.
* Ran sample jobs such as MR/Distributed shell and verified for
Thanks Wangda for working on this!
+1 (non-binding)
- Downloaded the binary tarball and verified the checksum.
- Started a pseudo cluster inside one Docker container
- Ran the ResourceManager with the Fair Scheduler
- Verified distributed shell
- Verified the MapReduce pi job
- Sanity