This is an automated email from the ASF dual-hosted git repository.
stevel pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/hadoop-release-support.git
The following commit(s) were added to refs/heads/main by this push:
new 880d3a6 HADOOP-19770 Release Hadoop 3.4.3: RC0
880d3a6 is described below
commit 880d3a6fc22c5b9321bc4c4605e80131318cbfc5
Author: Steve Loughran <[email protected]>
AuthorDate: Tue Jan 27 20:23:13 2026 +0000
HADOOP-19770 Release Hadoop 3.4.3: RC0
---
README.md | 192 ++++++++++++++++++++++++-----
build.xml | 156 +++++++++--------------
src/releases/release-info-3.4.3.properties | 4 +-
src/text/core-announcement.txt | 16 +++
src/text/vote.txt | 51 +++++---
5 files changed, 268 insertions(+), 151 deletions(-)
diff --git a/README.md b/README.md
index 5914328..ac758c7 100644
--- a/README.md
+++ b/README.md
@@ -226,15 +226,24 @@ gpg --import private.key
Follow rest of process as mentioned in above HowToRelease doc.
+```sh
+git clone https://github.com/apache/hadoop.git
+cd hadoop
+git checkout --track origin/branch-3.4.3
+
+# for the arm build: dev-support/bin/create-release --docker --dockercache
+dev-support/bin/create-release --asfrelease --docker --dockercache
+```
+
### Create a `src/releases/release-X.Y.Z.properties`
Create a new release properties file, using an existing one as a template.
Update as appropriate.
-### Update `/release.properties`
+### Update `release.properties`
-Update the value of `release.version in `/release.properties` to
+Update the value of `release.version` in `release.properties` to
declare the release version. This is used to determine the specific release
properties
file for that version.
@@ -287,7 +296,7 @@ ant clean mvn-purge
Tip: look at the output to make sure it is cleaning the artifacts from the
release you intend to validate.
-### SCP RC down to `target/incoming`
+### SCP the RC down to `target/incoming`
This will take a while! Look in `target/incoming` for progress.
@@ -305,18 +314,19 @@ ant copy-scp-artifacts release.dir.check
The `release.dir.check` target just lists the directory.
-### Build a lean binary tar
+### Sidenote: lean binary tarballs
-The normal `binary tar.gz` files huge because they install a version of the AWS v2 SDK "bundle.jar"
+The normal `binary tar.gz` files were historically huge because they contained a version of the AWS v2 SDK `bundle.jar`
file which has been validated with the hadoop-aws module and the S3A connector which was built against it.
This is a really big file because it includes all the "shaded" dependencies as
well as client libraries
to talk with many unused AWS services up to and including scheduling satellite
downlink time.
-We ship the full bundle jar as it allows Hadoop and its downstream applications to be isolated from
+We shipped the full bundle jar as it allows Hadoop and its downstream applications to be isolated from
the choice of JAR dependencies in the AWS SDK. That is: it ensures a classpath that works out of the box
and stops Hadoop having to upgrade on a schedule determined by the maintainers of the AWS SDK POM files.
+
It does make for big images and that has some negative consequences.
* More data to download when installing Hadoop.
* More space is required in the filesystem of any host into which it is installed.
@@ -324,39 +334,45 @@ It does make for big images and that has some negative
consequences.
process.
* Larger container images if preinstalled.
-The "lean" `binary tar.gz` files eliminate these negative issues by being
+A "lean" tar.gz was built by stripping out the AWS SDK jar and signing the new tarball.
+
+The "lean" `binary tar.gz` files eliminated these negative issues by being
a variant of the normal x86 binary distribution with the relevant AWS SDK JAR removed.
-The build target `release.lean.tar` can do this once the normal x86 binaries have been downloaded.
-```bash
-ant release.lean.tar
-```
+Since Hadoop 3.4.3 the release binaries are automatically lean; an explicit build option of `-Dhadoop-aws-package` is needed to bundle the AWS JAR.
+The bundle.jar must now be added to `share/hadoop/common/lib`, and any version later than that of the release can be added for automatic inclusion in the classpath.
+
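+The manual install can be sketched as below; `HADOOP_HOME` is a placeholder for wherever the release tarball was expanded, the download URL follows the Maven Central layout used in the announcement text, and the network and copy steps are left commented out:

```shell
# Sketch: fetch the qualified AWS SDK bundle and install it into an expanded
# Hadoop release. HADOOP_HOME is an assumption; adjust to your installation.
SDK_VERSION=2.35.4
BUNDLE="bundle-${SDK_VERSION}.jar"
URL="https://repo1.maven.org/maven2/software/amazon/awssdk/bundle/${SDK_VERSION}/${BUNDLE}"
echo "would fetch ${URL}"
# curl -O "${URL}" && curl -O "${URL}.asc"   # download the jar and its signature
# gpg --verify "${BUNDLE}.asc" "${BUNDLE}"   # check the signature
# cp "${BUNDLE}" "${HADOOP_HOME}/share/hadoop/common/lib/"
```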
+The specific AWS SDK version the release was qualified with is still the only "safe" version; its version number must be included in the release announcement.
+
+Instructions on this are included in the release announcement.
-It performs the following actions:
-1. expands the binary .tar.gz under the path `target/bin-lean`
-2. deletes all files `bundle-*` from this expanded SDK
-3. builds a new binary release with the suffix `-lean.tar.gz`
-4. Generates new checksum and signatures.
### Building Arm64 binaries
-If arm64 binaries are being created then they must be
-built on an arm docker image.
+Arm64 binaries must be built on an arm docker image.
+They can be built locally or remotely (cloud server, Raspberry Pi 5, etc.).
+
Do not use the `--asfrelease` option as this stages the JARs.
Instead use the explicit `--deploy --native --sign` options.
The arm process is one of
-1. Create the full set of artifacts on an arm machine (macbook, cloud vm, ...)
+1. Create the full set of artifacts on an arm machine (macbook, cloud vm, ...).
+ Based on our experience, doing this on a clean EC2 Ubuntu VM is more reliable than any local laptop.
2. Use the ant build to copy and rename the `.tar.gz` with the native binaries only.
3. Create a new `.asc` file.
4. Generate new `.sha512` checksum file containing the new name.
- Renaming the old file is not sufficient.
+ Renaming the old file is insufficient.
5. Move these files into the `downloads/release/$RC` dir
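Step 4 matters because the `.sha512` file records the artifact's filename inside it. A scratch-file demonstration (filenames here are placeholders; GNU `sha512sum` assumed, use `shasum -a 512` on macOS):

```shell
# A renamed tarball needs a freshly generated checksum file: the .sha512
# embeds the artifact name, so renaming the old file leaves a stale entry.
printf 'demo payload' > hadoop-demo-aarch64.tar.gz
sha512sum hadoop-demo-aarch64.tar.gz > hadoop-demo-aarch64.tar.gz.sha512
# verification reads the embedded name, which is why regeneration is required
sha512sum -c hadoop-demo-aarch64.tar.gz.sha512
```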
To perform these stages, you need a clean directory of the same
hadoop commit ID as for the x86 release.
+#### Local Arm build
+
In `build.properties` declare its location
```properties
@@ -369,10 +385,8 @@ next step if `ant arm.release` process after this.
create the release.
-The ant `arm.create.release` target is broken until someone fixes HADOOP-18664. you can't launch create-release --docker from a build file
-
```bash
-time dev-support/bin/create-release --docker --dockercache --mvnargs="-Dhttp.keepAlive=false -Dmaven.wagon.http.pool=false" --deploy --native --sign
+time dev-support/bin/create-release --docker --dockercache --deploy --native --sign
```
*Important*: make sure there is no duplicate staged hadoop repo in nexus.
@@ -384,11 +398,51 @@ If there is: drop and restart the x86 release process to
make sure it is the one
# copy the artifacts to this project's target/ dir, renaming
ant arm.copy.artifacts
# sign artifacts then move to the shared RC dir alongside the x86 artifacts
-ant arm.release release.dir.check
+ant arm.sign.artifacts release.dir.check
+```
+
+#### Arm remote build and scp download
+
+Create the remote build on an arm server.
+
+```bash
+time dev-support/bin/create-release --docker --dockercache --deploy --native --sign
```
+In `build.properties` declare the connection details:
+
+| name                 | value                           |
+|----------------------|---------------------------------|
+| `arm.scp.hostname` | hostname of arm server |
+| `arm.scp.user` | username of arm server |
+| `arm.scp.hadoop.dir` | path under user homedir |
+
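+A matching `build.properties` fragment might look like the following (hostname, user and path are placeholders):

```properties
arm.scp.hostname=arm-builder.example.org
arm.scp.user=stevel
arm.scp.hadoop.dir=hadoop
```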
+Download the artifacts
+
+```bash
+ant arm.scp-artifacts
+```
+This downloads the artifacts to `downloads/arm/incoming`.
-### Copy to a staging location in the hadoop SVN repository.
+Copy and rename the binary tar file.
+
+```bash
+ant arm.scp.copy.artifacts
+```
+
+#### Arm signing
+
+```bash
+ant arm.sign.artifacts
+```
+
+Make sure that the log shows the GPG key used and that it matches that used for the rest of the build.
+
+```
+[gpg] gpg: using "38237EE425050285077DB57AD22CF846DBB162A0" as default secret key for signing
+```
+### Publishing the RC
+
+Publish the RC by copying it to a staging location in the hadoop SVN repository.
When committed to subversion it will be uploaded and accessible via a https://svn.apache.org URL.
@@ -398,16 +452,45 @@ This makes it visible to others via the apache svn site,
but it
is not mirrored yet.
When the RC is released, an `svn move` operation can promote it
-directly.
+directly to the release directory, from where it will be served at mirror locations.
*do this after preparing the arm64 binaries*
-Final review of the release files
+Finally, review the release files to make sure the `-aarch64.tar.gz` file is present along with the rest,
+and that everything is signed and checksummed.
+
```bash
ant release.dir.check
```
+```
+release.dir.check:
+     [echo] release.dir=/home/stevel/Projects/client-validator/downloads/hadoop-3.4.3-RC0
+     [x] total 2179608
+     [x] -rw-r--r--@ 1 stevel staff      10667 Jan 27 19:54 CHANGELOG.md
+     [x] -rw-r--r--@ 1 stevel staff        833 Jan 27 19:54 CHANGELOG.md.asc
+     [x] -rw-r--r--@ 1 stevel staff        153 Jan 27 19:54 CHANGELOG.md.sha512
+     [x] -rw-r--r--@ 1 stevel staff  511380916 Jan 27 20:02 hadoop-3.4.3-aarch64.tar.gz
+     [x] -rw-r--r--@ 1 stevel staff        833 Jan 27 20:02 hadoop-3.4.3-aarch64.tar.gz.asc
+     [x] -rw-r--r--@ 1 stevel staff        168 Jan 27 20:02 hadoop-3.4.3-aarch64.tar.gz.sha512
+     [x] -rw-r--r--@ 1 stevel staff    2302464 Jan 27 19:54 hadoop-3.4.3-rat.txt
+     [x] -rw-r--r--@ 1 stevel staff        833 Jan 27 19:54 hadoop-3.4.3-rat.txt.asc
+     [x] -rw-r--r--@ 1 stevel staff        161 Jan 27 19:54 hadoop-3.4.3-rat.txt.sha512
+     [x] -rw-r--r--@ 1 stevel staff   42682959 Jan 27 19:54 hadoop-3.4.3-site.tar.gz
+     [x] -rw-r--r--@ 1 stevel staff        833 Jan 27 19:54 hadoop-3.4.3-site.tar.gz.asc
+     [x] -rw-r--r--@ 1 stevel staff        165 Jan 27 19:54 hadoop-3.4.3-site.tar.gz.sha512
+     [x] -rw-r--r--@ 1 stevel staff   39418511 Jan 27 19:54 hadoop-3.4.3-src.tar.gz
+     [x] -rw-r--r--@ 1 stevel staff        833 Jan 27 19:54 hadoop-3.4.3-src.tar.gz.asc
+     [x] -rw-r--r--@ 1 stevel staff        164 Jan 27 19:54 hadoop-3.4.3-src.tar.gz.sha512
+     [x] -rw-r--r--@ 1 stevel staff  509684174 Jan 27 19:54 hadoop-3.4.3.tar.gz
+     [x] -rw-r--r--@ 1 stevel staff        833 Jan 27 19:54 hadoop-3.4.3.tar.gz.asc
+     [x] -rw-r--r--@ 1 stevel staff        160 Jan 27 19:54 hadoop-3.4.3.tar.gz.sha512
+     [x] -rw-r--r--@ 1 stevel staff       3495 Jan 27 19:54 RELEASENOTES.md
+     [x] -rw-r--r--@ 1 stevel staff        833 Jan 27 19:54 RELEASENOTES.md.asc
+     [x] -rw-r--r--@ 1 stevel staff        156 Jan 27 19:54 RELEASENOTES.md.sha512
+```
+
Now stage the files, first by copying the dir of release artifacts
into the svn-managed location
```bash
@@ -442,6 +525,35 @@ a tag.
```bash
ant print-tag-command
```
+This lists commands like:
+```
+print-tag-command:
+[echo] git.commit.id=56b832dfd5
+[echo]
+[echo] # command to tag the commit
+[echo] git tag -s release-3.4.3-RC0 -m "Release candidate 3.4.3-RC0" 56b832dfd5
+[echo]
+[echo] # how to verify it
+[echo] git tag -v release-3.4.3-RC0
+[echo]
+[echo] # how to view the log to make sure it really is the right commit
+[echo] git log tags/release-3.4.3-RC0
+[echo]
+[echo] # how to push to apache
+[echo] git push apache release-3.4.3-RC0
+[echo]
+[echo] # if needed, how to delete locally
+[echo] git tag -d release-3.4.3-RC0
+[echo]
+[echo] # if needed, how to delete it from apache
+[echo] git push --delete apache release-3.4.3-RC0
+[echo]
+[echo] # tagging the final release
+[echo] git tag -s rel/release-3.4.3 -m "HADOOP-19770. Hadoop 3.4.3 release"
+[echo] git push origin rel/release-3.4.3
+[echo]
+```
+From the output, go through the steps to tag, verify, view, and then push the tag.
### Prepare the maven repository
@@ -486,10 +598,12 @@
amd.src.dir=https://dist.apache.org/repos/dist/dev/hadoop/hadoop-${hadoop.versio
The property `category` controls what suffix to use when downloading artifacts.
The default value, "", pulls in the full binaries.
If set to `-lean` then lean artifacts are downloaded and validated.
+(note: this is obsolete but retained in case it is needed for arm64 validation)
```
category=-lean
```
+
### Targets of Relevance
| target | action |
@@ -505,7 +619,6 @@ category=-lean
| `release.bin.commands` | execute a series of commands against the untarred binaries |
| `release.site.untar` | untar the downloaded site artifact |
| `release.site.validate` | perform minimal validation of the site. |
-| `release.lean.tar` | create a release of the x86 binary tar without the AWS SDK |
Set `check.native.binaries` to `false` to skip native binary checks on platforms without them.
@@ -617,7 +730,8 @@ The ant build itself will succeed, even if the
`checknative` command reports a f
## Cloud connector integration tests
-To test cloud connectors you need the relevant credentials copied into place into their `src/test/resources` subdirectory, as covered in the appropriate documentation for each component.
+To test cloud connectors you need the relevant credentials copied into place into their
+`src/test/resources` subdirectory, as covered in the appropriate documentation for each component.
The location of this file must be defined in the property `auth-keys.xml`.
@@ -948,7 +1062,7 @@ Use the "tagging the final release" commands printed
2. Announce on hadoop-general as well as developer lists.
-## clean up your local system
+## Clean up your local system
For safety, purge your maven repo of all versions of the release, so
as to guarantee that everything comes from the production store.
@@ -957,7 +1071,7 @@ as to guarantee that everything comes from the production
store.
ant mvn-purge
```
-# tips
+# Tips
## Git status prompt issues in fish
@@ -1085,3 +1199,19 @@ Just expect to be required to justify changes after the
fact.
* Contributions by non-committers should be submitted as github PRs.
* Contributions by committers MAY be just done as commits to the main branch.
* The repo currently supports forced push to the main branch. We may need to block this.
+
+
+# What can go wrong?
+
+## Disconnection from remote system during build
+
+Docker should keep going. Use the `tmux` tool to maintain terminal sessions over interruptions.
+
+## Multiple staging repositories in Nexus
+
+If the Arm and x86 builds were running at the same time with `-asfrelease` or `-deploy` then the separate builds will have created their own repos.
+Abort the process, drop the repositories and rerun the builds sequentially, with only one of them set to create the staging repository.
+
+If a single host was building, then possibly network access reached the ASF Nexus server via multiple IP addresses (i.e. a VPN was involved).
+If this happened then both repositories are incomplete.
+Abort the build and retry. It may be that your network setup isn't going to work at all; the only fix then is to build somewhere else.
diff --git a/build.xml b/build.xml
index 041d943..0583618 100644
--- a/build.xml
+++ b/build.xml
@@ -204,8 +204,11 @@
location="${mvn.repo}/org/apache/hadoop"/>
<!-- ARM stuff -->
+ <!-- incoming dir for an scp download of arm artifacts-->
+ <setpath name="arm.incoming.dir" location="${downloads.dir}/arm/incoming"/>
+ <!-- local build dir -->
<setpath name="arm.artifact.dir"
location="${arm.hadoop.dir}/target/artifacts/" />
- <setpath name="arm.dir" location="${downloads.dir}/arm" />
+ <setpath name="arm.dir" location="${downloads.dir}/arm/prepare" />
<set name="arm.binary.prefix" value="hadoop-${hadoop.version}-aarch64" />
<set name="arm.binary.filename"
value="${arm.binary.prefix}${category}.tar.gz" />
<setpath name="arm.binary.src"
location="${arm.artifact.dir}/hadoop-${hadoop.version}.tar.gz" />
@@ -482,7 +485,7 @@
<!-- ========================================================= -->
- <!-- When building on remote systems (EC2 etc) this pulls down -->
+ <!-- When building on remote systems (EC2 etc.) this pulls down -->
<!-- the artifacts for the next stages -->
<!-- ========================================================= -->
@@ -511,11 +514,10 @@
<target name="copy-scp-artifacts" depends="init"
description="copy the downloaded artifacts from incoming to release dir">
- <delete dir="${release.dir}"/>
<copy todir="${release.dir}">
<fileset dir="${incoming.dir}/artifacts" includes="*" />
</copy>
- <echo>copies scp downloaded artifacts to ${release.dir}</echo>
+ <echo>copied scp downloaded artifacts to ${release.dir}</echo>
</target>
<!-- list whatever is in the release dir -->
@@ -1301,82 +1303,6 @@ Message is in file ${message.out}
</target>
- <!--
- Create a version of the x86 tar.gz file without the aws bundle.jar.
- If all systems had gnu tar, this would be a simple tar -delete
- call, but bsd systems are lacking here.
-
- Instead the workflow is
- * untar
- * delete
- * retar
- * gzip
- -->
- <target name="release.lean.tar" depends="init"
- description="create a lean version of the x86 binary release">
- <verify-release-dir />
- <!-- files to eventually release -->
- <setpath name="lean.tar.gz"
location="${release.dir}/${release}-lean.tar.gz" />
- <setpath name="lean.tar.gz.sha512" location="${lean.tar.gz}.sha512" />
- <setpath name="lean.tar.gz.asc" location="${lean.tar.gz}.asc" />
-
- <!-- intermediate files -->
- <setpath name="lean.dir" location="target/bin-lean" />
- <setpath name="lean.tar" location="${lean.dir}/${release}-lean.tar" />
-
- <delete dir="${lean.dir}" />
- <mkdir dir="${lean.dir}" />
- <gunzip src="${release.dir}/${release.binary.filename}"
dest="${lean.dir}"/>
-
- <echo>Untarring ${lean.dir}/${release}.tar</echo>
- <!-- use the native command to preserve properties -->
- <x executable="tar" dir="${lean.dir}" >
- <arg value="-xf" />
- <arg value="${release}.tar" />
- </x>
- <!-- delete the bundle -->
- <delete>
- <fileset dir="${lean.dir}" includes="**/bundle-*.jar"/>
- </delete>
- <!-- retar -->
- <echo>Creating new tar ${lean.dir}/${lean.tar}</echo>
- <x executable="tar" dir="${lean.dir}" >
- <arg value="-cf" />
- <arg value="${lean.tar}" />
- <arg value="${release}" />
- </x>
- <gzip src="${lean.tar}" destfile="${lean.tar.gz}"/>
-
- <!--
- sign it
- -->
- <delete file="${lean.tar.gz.sha512}" />
- <sha512 file="${lean.tar.gz}" />
- <loadfile srcfile="${lean.tar.gz.sha512}" property="lean.sha"/>
- <echo>Contents of ${lean.tar.gz.sha512}
-${lean.sha}
- </echo>
-
- <require-file path="${lean.tar.gz}" />
- <echo>Signing ${lean.tar.gz}</echo>
- <delete file="${lean.tar.gz.asc}" />
-
- <gpg dir="${release.dir}">
- <arg value="--detach-sign" />
- <arg value="-a" />
- <arg value="${lean.tar.gz}" />
- </gpg>
- <loadfile srcfile="${lean.tar.gz.asc}" property="lean.asc"/>
- <echo>Contents of ${lean.tar.gz.asc}
-${lean.asc}
- </echo>
-
- <echo>
- Lean x86 Binary release is ${lean.tar.gz}
- </echo>
- </target>
-
-
<target name="release.arm.untar" depends="release.dir.check"
description="untar the ARM binary release">
@@ -1608,31 +1534,58 @@ ${lean.asc}
-->
<!-- ========================================================= -->
- <!--
- create the arm distro
- -->
- <target name="arm.create.release" depends="init"
- description="create an arm native distro -no asf staging">
- <echo>Creating ARM release in ${arm.hadoop.dir} </echo>
- <x executable="time" dir="${arm.hadoop.dir}">
- <arg value="dev-support/bin/create-release"/>
- <arg value="--docker"/>
- <arg value="--dockercache"/>
- <arg value="--deploy"/>
- <arg value="--native"/>
- <arg value="--sign"/>
- <arg value="--mvnargs=-Dhttp.keepAlive=false
-Dmaven.wagon.http.pool=false"/>
+ <!-- ========================================================= -->
+ <!-- When building on remote systems (EC2 etc.) this pulls down -->
+ <!-- the artifacts for the next stages -->
+ <!-- ========================================================= -->
+
+
+ <target name="arm.scp-artifacts" depends="init"
+ description="scp the artifacts from a remote host. may be slow">
+ <fail unless="arm.scp.hostname"/>
+ <fail unless="arm.scp.user"/>
+ <fail unless="arm.scp.hadoop.dir"/>
+ <fail unless="arm.incoming.dir"/>
+
+ <set name="arm.scp.source"
+
value="${arm.scp.user}@${arm.scp.hostname}:${arm.scp.hadoop.dir}/target/artifacts"/>
+
+ <delete dir="${arm.incoming.dir}"/>
+ <mkdir dir="${arm.incoming.dir}"/>
+ <echo>Downloading Arm artifacts to ${arm.incoming.dir}; may take a
while</echo>
+ <!-- scp -r $srv:hadoop/target/artifacts ~/Projects/Releases
+ -->
+ <x executable="scp">
+ <arg value="-r"/>
+ <arg value="${arm.scp.source}"/>
+ <arg value="${arm.incoming.dir}"/>
</x>
</target>
- <!-- copy the arm binaries into downloads/arm with their final filenames -->
- <target name="arm.copy.artifacts" depends="init"
- description="copy the arm binary and .asc files">
+
+ <!-- copy the arm binary into downloads/arm with its final filename -->
+ <target name="arm.scp.copy.artifacts" depends="init"
+ description="copy the downloaded arm binary file">
+ <delete dir="${arm.dir}" />
+ <mkdir dir="${arm.dir}" />
+ <setpath name="arm.scp.binary.src"
location="${arm.incoming.dir}/artifacts/hadoop-${hadoop.version}.tar.gz" />
+
+ <echo>source artifact is ${arm.scp.binary.src}</echo>
+ <copy file="${arm.scp.binary.src}" tofile="${arm.binary}" />
+ <x executable="ls">
+ <arg value="-l"/>
+ <arg value="${arm.dir}"/>
+ </x>
+ </target>
+
+
+ <!-- copy the arm binary into downloads/arm with its final filename -->
+ <target name="arm.copy.local.artifacts" depends="init"
+ description="copy the local arm binary and .asc files">
<delete dir="${arm.dir}" />
<mkdir dir="${arm.dir}" />
<echo>source artifact is ${arm.binary.src}</echo>
<copy file="${arm.binary.src}" tofile="${arm.binary}" />
- <!-- <copy file="${arm.binary.src}.asc" tofile="${arm.binary.asc}" />-->
<x executable="ls">
<arg value="-l"/>
<arg value="${arm.dir}"/>
@@ -1663,6 +1616,13 @@ ${arm.sha}
${arm.asc}
</echo>
+ <copy todir="${release.dir}" >
+ <fileset file="${arm.binary}" />
+ <fileset file="${arm.binary.sha512}" />
+ <fileset file="${arm.binary.asc}" />
+ </copy>
+
+
</target>
<!-- Third party release assistance -->
diff --git a/src/releases/release-info-3.4.3.properties
b/src/releases/release-info-3.4.3.properties
index e8c8f60..ddf41b9 100644
--- a/src/releases/release-info-3.4.3.properties
+++ b/src/releases/release-info-3.4.3.properties
@@ -20,7 +20,7 @@ rc=RC0
#category=-lean
previous.version=3.4.2
release.branch=3.4.3
-git.commit.id=94e5aa1ce6d0
+git.commit.id=56b832dfd5
aws.sdk2.version=2.35.4
# HADOOP-19770 Release Hadoop 3.4.3
@@ -30,5 +30,5 @@ jira.title=Release Hadoop 3.4.3
amd.src.dir=https://dist.apache.org/repos/dist/dev/hadoop/3.4.3-RC0
arm.src.dir=${amd.src.dir}
http.source=${amd.src.dir}
-asf.staging.url=https://repository.apache.org/content/repositories/orgapachehadoop-1443
+asf.staging.url=https://repository.apache.org/content/repositories/orgapachehadoop-1461
diff --git a/src/text/core-announcement.txt b/src/text/core-announcement.txt
index ce6abf3..c4d5d83 100644
--- a/src/text/core-announcement.txt
+++ b/src/text/core-announcement.txt
@@ -15,3 +15,19 @@ and [changelog][3].
[1]: http://hadoop.apache.org/docs/r${ver}/index.html
[2]:
http://hadoop.apache.org/docs/r${ver}/hadoop-project-dist/hadoop-common/release/${ver}/RELEASENOTES.${ver}.html
[3]:
http://hadoop.apache.org/docs/r${ver}/hadoop-project-dist/hadoop-common/release/${ver}/CHANGELOG.${ver}.html
+
+This release does not include the bundle.jar containing the AWS SDK, used by the s3a connector
+in the hadoop-aws module.
+To use it, download from Maven Central the version of the SDK you wish to use:
+
+https://central.sonatype.com/artifact/software.amazon.awssdk/bundle/versions
+
+For this release, the version to download is ${aws.sdk2.version}:
+https://repo1.maven.org/maven2/software/amazon/awssdk/bundle/${aws.sdk2.version}/
+
+1. Download the bundle-${aws.sdk2.version}.jar artifact and check its signature with
+   the accompanying bundle-${aws.sdk2.version}.jar.asc file.
+
+2. Copy the JAR to share/hadoop/common/lib/
+
+(Newer AWS SDK versions should work, though regressions are almost inevitable.)
diff --git a/src/text/vote.txt b/src/text/vote.txt
index 36b0eac..2ee857a 100644
--- a/src/text/vote.txt
+++ b/src/text/vote.txt
@@ -24,25 +24,36 @@
https://dist.apache.org/repos/dist/dev/hadoop/${rc.dirname}/CHANGELOG.md
Release notes
https://dist.apache.org/repos/dist/dev/hadoop/${rc.dirname}/RELEASENOTES.md
-This is off branch-3.4.1
-
-Key changes include
-
-* Bulk Delete API. https://issues.apache.org/jira/browse/HADOOP-18679
-* Fixes and enhancements in Vectored IO API.
-* Improvements in Hadoop Azure connector.
-* Fixes and improvements post upgrade to AWS V2 SDK in S3AConnector.
-* This release includes Arm64 binaries. Please can anyone with
- compatible systems validate these.
-
-Note, because the arm64 binaries are built separately on a different
-platform and JVM, their jar files may not match those of the x86
-release -and therefore the maven artifacts. I don't think this is
-an issue (the ASF actually releases source tarballs, the binaries are
-there for help only, though with the maven repo that's a bit blurred).
-
-The only way to be consistent would actually untar the x86.tar.gz,
-overwrite its binaries with the arm stuff, retar, sign and push out
-for the vote. Even automating that would be risky.
+Build note: the maven artifacts are off the aarch64 release, not the x86;
+single builds on ec2 VMs through our cloud infra kept resulting in multiple staging repos,
+probably a side effect of our VPN setup.
+
+A Raspberry Pi 5 is perfectly adequate to cut a release, even with just an SD card as the storage.
+I built the x86 release remotely, though as I have a 2016 Ubuntu laptop I could try there too.
+
+
+AWS SDK
+-------
+
+Previous releases included a "lean" tar without the AWS SDK, and/or encountered
+problems with the size of the .tar artifacts.
+
+Now all releases are built without the AWS SDK; it must be explicitly added to
+share/hadoop/common/lib/
+
+To add aws support to hadoop, download from Maven Central the version of the SDK
+you wish to use:
+
+https://central.sonatype.com/artifact/software.amazon.awssdk/bundle/versions
+
+For this release, the version to download is ${aws.sdk2.version}:
+https://repo1.maven.org/maven2/software/amazon/awssdk/bundle/${aws.sdk2.version}/
+
+1. Download the bundle-${aws.sdk2.version}.jar artifact and check its signature with
+   the accompanying bundle-${aws.sdk2.version}.jar.asc file.
+
+2. Copy the JAR to share/hadoop/common/lib/
+
+Newer AWS SDK versions _should_ work, though regressions are almost inevitable.
Please try the release and vote. The vote will run for 5 days.
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]