FYI, we got our last blocker in today, so I'm currently rolling RC1.
Stay tuned!
On Thu, Nov 30, 2017 at 8:32 AM, Allen Wittenauer
wrote:
>
> > On Nov 30, 2017, at 1:07 AM, Rohith Sharma K S <
> rohithsharm...@apache.org> wrote:
> >
> >
> > >. If ATSv1 isn’t
> On Nov 21, 2017, at 2:16 PM, Vinod Kumar Vavilapalli
> wrote:
>
>>> - $HADOOP_YARN_HOME/sbin/yarn-daemon.sh start historyserver doesn't even
>>> work. Not just deprecated in favor of timelineserver as was advertised.
>>
>> This works for me in trunk and the bash code doesn't appear to have
>> changed in a very long time.
bq. (a) The behavior of this command. Clearly, it will conflict with the
MapReduce JHS - only one of them can be started on the same node.
Yes, since the PID file will be the same.
This problem is not present in branch-2.9+, as the pid file is suffixed with
mapred (for the JHS).
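The collision can be sketched in shell. The path pattern below is an illustration of how the daemon scripts derive pid files from the daemon name, not the literal branch-2 code; `HADOOP_PID_DIR`, the user, and the prefixes are assumptions:

```shell
# Illustrative sketch of the pid-file collision (paths and naming pattern
# are assumptions, not the actual daemon-script code).
HADOOP_PID_DIR=/tmp/hadoop-pids
user=hadoop
daemon=historyserver

# Before the fix: both start scripts derive the pid file from the daemon
# name alone, so the YARN history server and the MapReduce JHS race for
# the same file on one node.
yarn_pid="${HADOOP_PID_DIR}/yarn-${user}-${daemon}.pid"
jhs_pid="${HADOOP_PID_DIR}/yarn-${user}-${daemon}.pid"

# branch-2.9+: the JHS pid file carries a mapred component, so the two
# daemons no longer clash.
jhs_pid_fixed="${HADOOP_PID_DIR}/mapred-${user}-${daemon}.pid"

[ "$yarn_pid" = "$jhs_pid" ] && echo "collision: $yarn_pid"
[ "$yarn_pid" != "$jhs_pid_fixed" ] && echo "no collision after fix"
```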
Hi folks,
Thanks again for the testing help with the RC. Here's our dashboard for the
3.0.0 release:
https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12329849
Right now we're tracking three blockers:
* HADOOP-15058 is the create-release fix; I just put up a patch which needs
Hi Vinod,
bq. (b) We need to figure out if this V1 TimelineService should even be
supported given ATSv2.
Yes, I am following this discussion. Let me chat with Rohith and Varun
about this and we will respond on this thread. As such, my preliminary
thoughts are that we should continue to support
>> - $HADOOP_YARN_HOME/sbin/yarn-daemon.sh start historyserver doesn't even
>> work. Not just deprecated in favor of timelineserver as was advertised.
>
> This works for me in trunk and the bash code doesn’t appear to have
> changed in a very long time. Probably something local to your
Subject: Re: [VOTE] Release Apache Hadoop 3.0.0 RC0
>>> - Cannot enable new UI in YARN because it is under a non-default
>>> compilation flag. It should be on by default.
>>>
>>
>> The yarn-ui profile has always been off by default, AFAIK. It's documented
>> to turn it on in BUILDING.txt for release builds, and we do it in
>> create-release.
>>
>> - The YARN ResourceManager UI always shows one decommissioned node to
>> start with, even when no NodeManagers have been started yet:
>> Info :-1, DECOMMISSIONED, null rack. It shows up only in the UI though,
>> not in the CLI node -list
>>
>
> Is this a blocker? Could we get a
The original release script and instructions broke the build up into
three or so steps. When I rewrote it, I kept that same model. It’s probably
time to re-think that. In particular, it should probably be one big step that
even does the maven deploy. There’s really no harm in doing
On Mon, Nov 20, 2017 at 9:59 PM, Sangjin Lee wrote:
>
> On Mon, Nov 20, 2017 at 9:46 PM, Andrew Wang
> wrote:
>
>> Thanks for the spot, Sangjin. I think this bug was introduced in
>> create-release by HADOOP-14835. The multi-pass maven build generates
Thanks for the thorough review Vinod, some inline responses:
*Issues found during testing*
>
> Major
> - The previously supported way of using different tar-balls for different
> sub-modules is completely broken - the common and HDFS tar.gz are
> completely empty.
>
Is this something
On Mon, Nov 20, 2017 at 9:46 PM, Andrew Wang
wrote:
> Thanks for the spot, Sangjin. I think this bug was introduced in create-release
> by HADOOP-14835. The multi-pass maven build generates these dummy client
> jars during the site build since skipShade is specified.
>
>
Thanks for the spot, Sangjin. I think this bug was introduced in create-release
by HADOOP-14835. The multi-pass maven build generates these dummy client
jars during the site build since skipShade is specified.
This might be enough to cancel the RC. Thoughts?
Best,
Andrew
On Mon, Nov 20, 2017 at 7:51
> - When did we stop putting CHANGES files into the source artifacts?
CHANGES files were removed by https://issues.apache.org/jira/browse/HADOOP-11792
> - Even after "mvn install"ing once, shading is repeated again and again for
> every new 'mvn install', even though there are no source changes.
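For iterative local work, the shading pass can be skipped with the `-DskipShade` switch mentioned later in this thread; it must never be used for release artifacts, since it produces the dummy client jars discussed below. The invocation is shown for illustration, not executed here:

```shell
# Illustrative: skip the shaded-client rebuild on iterative local installs.
# Never release with -DskipShade; it yields empty "dummy" client jars.
build_cmd="mvn install -DskipTests -DskipShade"
echo "$build_cmd"
```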
I checked the client jars that are supposed to contain shaded dependencies,
and they don't look quite right:
$ tar -tzvf hadoop-3.0.0.tar.gz | grep hadoop-client-api-3.0.0.jar
-rw-r--r-- 0 andrew andrew 44531 Nov 14 11:53
hadoop-3.0.0/share/hadoop/client/hadoop-client-api-3.0.0.jar
$ tar
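A sanity check follows directly from the listing: a properly shaded hadoop-client-api jar is tens of megabytes, so a ~44 KB entry is a red flag. A sketch, with the threshold a rough assumption rather than an official bound:

```shell
# Sketch: flag a suspiciously small "shaded" client jar. 44531 bytes is
# the size from the tar listing above; a genuinely shaded jar, which
# bundles its dependencies, is far larger.
size=44531
threshold=1000000   # ~1 MB, rough lower bound (assumption)
if [ "$size" -lt "$threshold" ]; then
  echo "hadoop-client-api jar looks unshaded: ${size} bytes"
fi
```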
On Mon, Nov 20, 2017 at 5:26 PM, Vinod Kumar Vavilapalli wrote:
> Thanks for all the push, Andrew!
>
> Looking at the RC. Went through my usual check-list. Here's my summary.
> Will cast my final vote after comparing and validating my findings with
> others.
>
> Verification
Thanks for all the push, Andrew!
Looking at the RC. Went through my usual check-list. Here's my summary. Will
cast my final vote after comparing and validating my findings with others.
Verification
- [Check] Successful recompilation from source tar-ball
- [Check] Signature verification
-
Compilation passed for me. Using jdk1.8.0_40.jdk.
+Vinod
> On Nov 20, 2017, at 4:16 PM, Wei-Chiu Chuang wrote:
>
> @vinod
> I followed your command but I could not reproduce your problem.
>
> [weichiu@storage-1 hadoop-3.0.0-src]$ ls -al hadoop-common-project/hadoop-c
>
@vinod
I followed your command but I could not reproduce your problem.
[weichiu@storage-1 hadoop-3.0.0-src]$ ls -al hadoop-common-project/hadoop-common/target/hadoop-common-3.0.0.tar.gz
-rw-rw-r-- 1 weichiu weichiu 37052439 Nov 20 21:59
+1 (binding)
Thanks Andrew!
- Verified md5 and built from source
- Started a pseudo-distributed cluster with KMS
- Performed basic HDFS operations plus encryption-related operations
- Verified logs and web UI
- Confidence from CDH testing (will let Andrew answer officially, but
Thanks, Andrew!
+1 (non-binding)
- Verified checksums and signatures
- Deployed a single node cluster on CentOS 7.4 using the binary and source
release
- Ran hdfs commands
- Ran pi and distributed shell using the default and docker runtimes
- Verified the UIs
- Verified the change log
-Shane
+1 binding
Run the following steps:
* Check md5 of sources and package.
* Run a YARN + HDFS pseudo cluster.
* Run the TeraSort suite on YARN.
* Run HDFS CLIs (ls, rm, etc.)
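Checksum steps like the md5 check above can be scripted. A minimal sketch with a stand-in file; real RC artifacts and their .md5/.mds files come from the staging URL:

```shell
# Sketch: generate and verify an md5 checksum file, as done for the RC
# tarballs. The artifact here is a stand-in, not a real release file.
tmpdir=$(mktemp -d)
printf 'release bits\n' > "$tmpdir/artifact.tar.gz"
( cd "$tmpdir" && md5sum artifact.tar.gz > artifact.tar.gz.md5 )
check=$(cd "$tmpdir" && md5sum -c artifact.tar.gz.md5)
echo "$check"    # prints "artifact.tar.gz: OK"
rm -r "$tmpdir"
```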
On Mon, Nov 20, 2017 at 12:58 PM, Vinod Kumar Vavilapalli
wrote:
> Quick question.
>
> I used to be able (in
Quick question.
I used to be able (in 2.x line) to create dist tarballs (mvn clean install
-Pdist -Dtar -DskipTests -Dmaven.javadoc.skip=true) from the source being voted
on (hadoop-3.0.0-src.tar.gz).
The idea is to install HDFS, YARN, MR separately in separate root-directories
from the
Thanks for that proposal Andrew, and for not wrapping up the vote yesterday.
> In terms of downstream testing, we've done extensive
> integration testing with downstreams via the alphas
> and betas, and we have continuous integration running
> at Cloudera against branch-3.0.
Could you please
I'd definitely extend it for a few more days. I only see 3 binding +1s so far -
not a great number to brag about on our first major release in years.
Also going to nudge folks into voting.
+Vinod
> On Nov 17, 2017, at 3:26 PM, Andrew Wang wrote:
>
> Hi Arpit,
>
> I
Thanks Andrew for getting this out!
+1 (non-binding)
* Built from source on CentOS 7.3.1611, jdk1.8.0_111
* Deployed a non-HA cluster and tested a few EC file operations.
* Ran basic shell commands (ls, mkdir, put, get, ec, dfsadmin).
* Ran some sample jobs.
* HDFS Namenode UI looks good.
Thanks,
+1 (non-binding).
- Built from source
- Installed an HA cluster
- Ran basic shell commands
- Ran sample jobs like pi and SLive
Thanks Andrew for driving this.
On Wed, Nov 15, 2017 at 3:04 AM, Andrew Wang
wrote:
> Hi folks,
>
> Thanks as always to the many, many
Hi Arpit,
I agree the timing is not great here, but extending it to meaningfully
avoid the holidays would mean extending it an extra week (e.g. to the
29th). We've been coordinating with ASF PR for that Tuesday, so I'd really,
really like to get the RC out before then.
In terms of downstream
Sent: Tuesday, November 14, 2017 3:34 PM
Subject: [VOTE] Release Apache Hadoop 3.0.0 RC0
Hi folks,
Thanks as always to the many, many contributors who helped with this
release. I've created RC0 for Apache Hadoop 3.0.0. The artifacts are
available here:
http://people.apache.org/~wa
Hi Andrew,
Thank you for your hard work in getting us to this step. This is our first
major GA release in many years.
I feel a 5-day vote window ending over the weekend before Thanksgiving may not
provide sufficient time to evaluate this RC, especially for downstream
components.
Would you
Thanks for the spot; normally create-release spits those out. I uploaded
asc and mds for the release artifacts.
Best,
Andrew
On Thu, Nov 16, 2017 at 11:33 PM, Akira Ajisaka wrote:
> Hi Andrew,
>
> Signatures are missing. Would you upload them?
>
> Thanks,
> Akira
>
>
> On
Hi Andrew,
Signatures are missing. Would you upload them?
Thanks,
Akira
On 2017/11/15 6:34, Andrew Wang wrote:
Hi folks,
Thanks as always to the many, many contributors who helped with this
release. I've created RC0 for Apache Hadoop 3.0.0. The artifacts are
available here:
+1 (binding)
- Verified checksums of all tarballs
- Built source with native, Java 1.8.0_131-b11 on Mac OS X 10.12.6
- Passed all S3A and ADL integration tests
- Deployed both binary and built source to a pseudo cluster, passed the
following sanity tests in insecure, SSL, and
Hi folks,
Thanks as always to the many, many contributors who helped with this
release. I've created RC0 for Apache Hadoop 3.0.0. The artifacts are
available here:
http://people.apache.org/~wang/3.0.0-RC0/
This vote will run 5 days, ending on Nov 19th at 1:30pm Pacific.
3.0.0 GA contains 291