This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/hadoop-release-support.git

commit d5fe2c6581dd96b0ad8a496d37e276f1fe7ca5ed
Author: Steve Loughran <ste...@cloudera.com>
AuthorDate: Fri Mar 24 14:29:50 2023 +0000

    Hadoop 3.3.5 ships
---
 README.md                            | 121 ++++++++++++++++++++++++++++++-----
 build.xml                            |  78 ++++++++++++++++++----
 src/text/announcement.txt            |  19 +-----
 src/text/core-announcement.txt       |  43 +++++++++++++
 src/text/user-email-announcement.txt |   2 +-
 5 files changed, 216 insertions(+), 47 deletions(-)

diff --git a/README.md b/README.md
index 64e752f..2dae02c 100644
--- a/README.md
+++ b/README.md
@@ -78,10 +78,10 @@ Instead use the explicit `--deploy --native --sign` options.
 
 The arm process is one of
 1. Create the full set of artifacts on an arm machine (macbook, cloud vm, ...)
-1. Use the ant build to copy and rename the `.tar.gz` with the native binaries only
-1. Create a new `.asc `file.
-1. Generate new sha512 checksum file containing the new name.
-1. Move these files into the `downloads/release/$RC` dir
+2. Use the ant build to copy and rename the `.tar.gz` with the native binaries only
+3. Create a new `.asc` file.
+4. Generate a new sha512 checksum file containing the new name.
+5. Move these files into the `downloads/release/$RC` dir
 
 To perform these stages, you need a clean directory of the same
 hadoop commit ID as for the x86 release.
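+
+For reference, a sketch of steps 3 and 4 done by hand (the tarball name
+matches `arm.binary.prefix` in `build.xml`; the ant build also provides
+targets for this):
+
+```bash
+# step 3: create a detached armored signature for the renamed arm tarball
+gpg --armor --detach-sign hadoop-3.3.5-aarch64.tar.gz
+
+# step 4: generate a sha512 checksum file containing the new name
+shasum -a 512 hadoop-3.3.5-aarch64.tar.gz > hadoop-3.3.5-aarch64.tar.gz.sha512
+```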
@@ -198,13 +198,13 @@ repeat all the testing of downstream projects, this time
 validating the staged artifacts, rather than any build
 locally.
 
-# How to download and build someone else's release candidate
+# How to download and build a staged release candidate
 
 In build properties, declare `hadoop.version`, `rc` and `http.source`
 
 ```properties
 hadoop.version=3.3.5
-rc=2
+rc=3
 
http.source=https://dist.apache.org/repos/dist/dev/hadoop/hadoop-${hadoop.version}-RC${rc}/
 ```
 
@@ -218,7 +218,7 @@ http.source=https://dist.apache.org/repos/dist/dev/hadoop/hadoop-${hadoop.versio
 | `release.src.build`  | build the source           |
 | `release.src.test`   | build and test the source  |
 | `gpg.keys`           | import the hadoop KEYS     |
-| `gpg.verify `        | verify the D/L'd artifacts |
+| `gpg.verify`         | verify the D/L'd artifacts |
 |                      |                            |
 
 Set `check.native.binaries` to false to skip native binary checks on platforms without them.
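+
+For example, a typical verification sequence (target names are from the table
+above; the property can also be set in `build.properties`):
+
+```bash
+# import the hadoop KEYS, then verify the downloaded artifacts
+ant gpg.keys
+ant gpg.verify
+
+# build and test the source, skipping the native binary checks
+ant release.src.test -Dcheck.native.binaries=false
+```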
@@ -349,12 +349,21 @@ Spark itself does not include any integration tests of the object store connectors
 This independent module tests the s3a, gcs and abfs connectors,
 and associated committers, through the spark RDD and SQL APIs.
 
+
 [cloud integration](https://github.com/hortonworks-spark/cloud-integration)
 ```bash
 ant cloud-examples.build
 ant cloud-examples.test
 ```
 
+
+The test run is fairly tricky to get working:
+* MUST be java 11+
+* Must have `cloud.test.configuration.file` set to an XML conf file
+  declaring the auth credentials and stores to use for the target object stores
+  (s3a, abfs, gcs); see the sketch below this list.
+
+
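+The configuration file can be set in `build.properties` or passed on the
+command line; a sketch, with a hypothetical path:
+
+```bash
+# requires JDK 11+; the XML file declares credentials and target stores
+ant cloud-examples.test \
+  -Dcloud.test.configuration.file=/home/user/conf/cloud-tests.xml
+```
+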
 ## HBase filesystem
 
 [hbase-filesystem](https://github.com/apache/hbase-filesystem.git)
@@ -368,7 +377,26 @@ Integration tests will go through S3A connector.
 ant hboss.build
 ```
 
-## building the Hadoop site
+# After the Vote Succeeds: publishing the release
+
+## Update the announcement and create site/email notifications
+
+Edit `src/text/announcement.txt` to have an up-to-date
+description of the release.
+
+The `release.site.announcement` target will generate these
+announcements. Execute the target and then review
+the generated files in `target/`
+
+```bash
+ant release.site.announcement
+```
+
+The announcement must be generated before the next stage,
+so make sure the common body of the site and email
+announcement is up to date: `src/text/core-announcement.txt`
+
+## Build the Hadoop site
 
 Set `hadoop.site.dir` to be the path of the
 local clone of the ASF site repository
@@ -378,21 +406,30 @@ https://gitbox.apache.org/repos/asf/hadoop-site.git
 hadoop.site.dir=/Users/stevel/hadoop/release/hadoop-site
 ```
 
-Prepare the site with the following targets
+Prepare the site; this also demand-generates the release announcement.
+
+The site `.tar.gz` distributable is used for the site documentation; it must
+already have been downloaded. It must be untarred and copied under the
+SCM-managed `${hadoop.site.dir}` repository, linked up
+and then committed.
 
 ```bash
-ant release.site.announcement
+ant release.site.untar
+
 ant release.site.docs
 ```
 
-Review the annoucement.
+Review the announcement.
 
-### Manually link the current/stable symlinks to the new release
+### Manually link the site current/stable symlinks to the new release
 
-In the hadoop site dir
+In the hadoop site dir's `content/docs` subdir:
 
 ```bash
 
+# update
+git pull
+
 # review current status
 ls -l
 
@@ -402,14 +439,66 @@ ln -s r.3.3.5 current3
 
 # symlink stable
 rm stable3
-ln -s r3.3.5 stable
 ln -s r3.3.5 stable3
 
 # review new status
 ls -l
 ```
 
-### Git status prompt issues in fish
+
+Finally, *commit*
+
+```bash
+git add .
+git status
+git commit -S -m "HADOOP-18470. Release Hadoop 3.3.5"
+git push
+```
+
+
+## Promoting the RC artifacts to production through `svn move`
+
+```bash
+
+# check that the source and dest URLs are good
+ant staging-init
+# do the promotion
+ant stage-move-to-production
+```
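+
+Under the hood this is an `svn move` from the staging URL to the production
+URL (see `stage-move-to-production` in `build.xml`); for this release the
+equivalent manual command would be roughly:
+
+```bash
+svn move \
+  https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.3.5-RC3 \
+  https://dist.apache.org/repos/dist/release/hadoop/common/hadoop-3.3.5 \
+  -m "HADOOP-18470. Releasing Hadoop 3.3.5"
+```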
+
+## Update the `current` ref
+
+In the production dist area
+https://dist.apache.org/repos/dist/release/hadoop/common
+update the `current` reference to point to the new release.
+
+Check that release URL in your browser.
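+
+A quick command-line check (the directory name assumes this release):
+
+```bash
+# confirm the promoted release directory is now visible
+curl -I https://dist.apache.org/repos/dist/release/hadoop/common/hadoop-3.3.5/
+```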
+
+## Publish nexus artifacts
+
+Do this at
+[https://repository.apache.org/#stagingRepositories](https://repository.apache.org/#stagingRepositories)
+
+To verify the artifacts are visible,
+[search for hadoop-common](https://repository.apache.org/#nexus-search;quick~hadoop-common)
+and check that the latest version is in the production repository.
+
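+A command-line spot check is also possible (a sketch; the artifact path is
+an assumption):
+
+```bash
+# the release POM should resolve once the staging repository is released
+curl -I https://repository.apache.org/content/repositories/releases/org/apache/hadoop/hadoop-common/3.3.5/hadoop-common-3.3.5.pom
+```
+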
+## Send that email announcement
+
+Send the generated email announcement (also in `target/`) to the user and
+announce lists.
+
+### Tag the final release and push that tag
+
+The ant target `print-tag-command` prints the commands needed to create and
+sign a tag.
+
+```bash
+ant print-tag-command
+```
+
+Use the "tagging the final release" commands printed
+
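+For this release, the printed "tagging the final release" commands expand to
+(per the `print-tag-command` echo in `build.xml`):
+
+```bash
+git tag -s rel/release-3.3.5 -m "HADOOP-18470. Hadoop 3.3.5 release"
+git push origin rel/release-3.3.5
+```
+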
+# Tips
+
+## Git status prompt issues in fish
 
 There are a lot of files, and if your shell has a prompt which shows the git repo state, scanning can take a long time.
 Disable it; for fish:
@@ -418,8 +507,6 @@ Disable it, such as for fish:
 set -e __fish_git_prompt_showdirtystate
 ```
 
-Finally, *commit*
-
 ## Adding a global maven staging profile `asf-staging`
 
 Many projects have a profile to use a staging repository, especially the ASF one.
diff --git a/build.xml b/build.xml
index cef33fa..b18c605 100644
--- a/build.xml
+++ b/build.xml
@@ -57,13 +57,13 @@
   <property name="release.branch" value="3.3"/>
 
 
-  <property name="git.commit.id" value="3262495904d"/>
+  <property name="git.commit.id" value="706d88266ab"/>
   <property name="jira.id" value="HADOOP-18470"/>
 
 
 
   <!-- for spark builds -->
-  <property name="spark.version" value="3.4.0-SNAPSHOT"/>
+  <property name="spark.version" value="3.5.0-SNAPSHOT"/>
   <!--  spark excludes hadoop-aws dependency and forces in their own
         this fixes it to be in sync with hadoop
         see https://issues.apache.org/jira/browse/SPARK-39969
@@ -74,7 +74,6 @@
   <property name="release" value="hadoop-${hadoop.version}"/>
   <property name="rc.dirname" value="${release}-${rc}"/>
   <property name="release.dir" location="${downloads.dir}/${rc.dirname}"/>
-  <property name="staged.artifacts.dir" 
location="${staging.dir}/${rc.dirname}"/>
 
   <property name="tag.name" value="release-${rc.name}"/>
 <!--  <property name="nexus.staging.url"
@@ -88,14 +87,20 @@
   <property name="arm.artifact.dir" 
location="${arm.hadoop.dir}/target/artifacts/" />
   <property name="arm.dir" location="${downloads.dir}/arm" />
   <property name="arm.binary.src" 
location="${arm.artifact.dir}/hadoop-${hadoop.version}.tar.gz" />
-  <property name="arm.binary.prefix" value="hadoop-arm64-${hadoop.version}" />
+  <property name="arm.binary.prefix" value="hadoop-${hadoop.version}-aarch64" 
/>
   <property name="arm.binary.filename" value="${arm.binary.prefix}.tar.gz" />
   <property name="arm.binary" location="${arm.dir}/${arm.binary.filename}" />
   <property name="arm.binary.sha512" location="${arm.binary}.sha512" />
   <property name="arm.binary.asc" location="${arm.binary}.asc" />
-  <property name="staging.commit.msg" value="${jira.id}. Hadoop ${rc.name} 
built from ${git.commit.id}" />
 
+  <property name="staged.artifacts.dir" 
location="${staging.dir}/${rc.dirname}"/>
+
+  <property name="staging.commit.msg" value="${jira.id}. Hadoop ${rc.name} 
built from ${git.commit.id}" />
 
+  <property name="svn.apache.dist" value="https://dist.apache.org/"/>
+  <property name="svn.staging.url" 
value="${svn.apache.dist}/repos/dist/dev/hadoop/${rc.dirname}"/>
+  <property name="svn.production.url" 
value="${svn.apache.dist}/repos/dist/release/hadoop/common/${release}"/>
+  <property name="production.commit.msg" value="${jira.id}. Releasing Hadoop 
${hadoop.version}" />
 
   <target name="init">
 
@@ -350,11 +355,23 @@
     </echo>
   </target>
 
-  <target name="stage-to-svn"
-    description="stage the RC into svn"
+  <target name="staging-init"
+    description="init svn staging"
     depends="init">
     <fail unless="jira.id"/>
     <fail unless="git.commit.id"/>
+    <echo>
+      staging.commit.msg = ${staging.commit.msg}
+      production.commit.msg = ${production.commit.msg}
+      svn.staging.url = ${svn.staging.url}
+      svn.production.url = ${svn.production.url}
+    </echo>
+  </target>
+
+  <target name="stage-to-svn"
+    description="stage the RC into svn"
+    depends="staging-init">
+
 
     <svn dir="${staging.dir}">
       <arg value="update" />
@@ -374,9 +391,7 @@
 
   <target name="stage-svn-rollback"
     description="rollback a version staged to RC"
-    depends="init">
-    <fail unless="jira.id"/>
-    <fail unless="git.commit.id"/>
+    depends="staging-init">
 
     <svn dir="${staging.dir}">
       <arg value="update" />
@@ -395,13 +410,42 @@
 
   <target name="stage-svn-log"
     description="print the staging svn repo log"
-    depends="init">
+    depends="staging-init">
 
     <svn dir="${staging.dir}">
       <arg value="log" />
     </svn>
   </target>
 
+  <target name="stage-move-to-production"
+    description="promote the staged the RC into dist"
+    depends="staging-init">
+
+    <svn dir="${staging.dir}">
+      <arg value="update" />
+    </svn>
+     <svn dir="${staging.dir}">
+      <arg value="info" />
+      <arg value="${svn.staging.url}" />
+    </svn>
+
+    <echo>Comitting with message ${production.commit.msg}. Please wait</echo>
+
+    <svn dir="${staging.dir}">
+      <arg value="move" />
+      <arg value="${svn.staging.url}" />
+      <arg value="${svn.production.url}" />
+      <arg value="-m" />
+      <arg value="${production.commit.msg}" />
+    </svn>
+    <svn dir="${staging.dir}">
+      <arg value="commit" />
+      <arg value="-m" />
+      <arg value="${production.commit.msg}" />
+    </svn>
+  </target>
+
+
   <target name="print-tag-command"
     description="print the git command to tag the rc"
     depends="init">
@@ -424,6 +468,10 @@
 
       # if needed, how to delete it from apache
       git push --delete apache ${tag.name}
+
+      # tagging the final release
+      git tag -s rel/release-${hadoop.version} -m "${jira.id}. Hadoop ${hadoop.version} release"
+      git push origin rel/release-${hadoop.version}
     </echo>
   </target>
 
@@ -538,6 +586,8 @@ Message is in file ${message.out}
     <mvn dir="${cloud-examples.dir}">
       <arg value="-Psnapshots-and-staging"/>
       <arg value="-Dspark-3.4"/>
+      <arg value="-Pscale"/>
+      <arg value="-Dscale.test.enabled=true"/>
       <arg value="-Dspark.version=${spark.version}"/>
       <arg value="-Dhadoop.version=${hadoop.version}"/>
       <arg 
value="-Dcloud.test.configuration.file=${cloud.test.configuration.file}"/>
@@ -977,6 +1027,12 @@ Message is in file ${message.out}
     description="build site announcement"
     depends="release.site.prepare">
 
+    <loadfile property="core-announcement.txt"
+      srcFile="src/text/core-announcement.txt">
+      <filterchain>
+        <expandproperties/>
+      </filterchain>
+    </loadfile>
     <loadfile property="announcement.txt"
       srcFile="src/text/announcement.txt">
       <filterchain>
diff --git a/src/text/announcement.txt b/src/text/announcement.txt
index db79deb..219caca 100644
--- a/src/text/announcement.txt
+++ b/src/text/announcement.txt
@@ -17,21 +17,4 @@ linked: true
   limitations under the License. See accompanying LICENSE file.
 -->
 
-This is a release of Apache Hadoop ${release.branch} line.
-
-It contains a small number of security and critical integration fixes since ${previous.ver}.
-
-Users of Apache Hadoop ${previous.ver} should upgrade to this release.
-
-Users of hadoop 2.x and hadoop 3.2 should also upgrade to the 3.3.x line.
-As well as feature enhancements, this is the sole branch currently
-receiving fixes for anything other than critical security/data integrity
-issues.
-
-Users are encouraged to read the [overview of major changes][1] since release ${previous.ver}.
-For details of bug fixes, improvements, and other enhancements since the previous ${previous.ver} release,
-please check [release notes][2] and [changelog][3].
-
-[1]: http://hadoop.apache.org/docs/r${ver}/index.html
-[2]: http://hadoop.apache.org/docs/r${ver}/hadoop-project-dist/hadoop-common/release/${ver}/RELEASENOTES.${ver}.html
-[3]: http://hadoop.apache.org/docs/r${ver}/hadoop-project-dist/hadoop-common/release/${ver}/CHANGELOG.${ver}.html
+${core-announcement.txt}
diff --git a/src/text/core-announcement.txt b/src/text/core-announcement.txt
new file mode 100644
index 0000000..524f5db
--- /dev/null
+++ b/src/text/core-announcement.txt
@@ -0,0 +1,43 @@
+
+This is a release of the Apache Hadoop ${release.branch} line.
+
+Key changes include:
+
+* A big update of dependencies to try and keep those reports of
+  transitive CVEs under control, both genuine and false positives.
+* Critical fix to ABFS input stream prefetching for correct reading.
+* Vectored IO API for all FSDataInputStream implementations, with
+  high-performance versions for the file:// and s3a:// filesystems:
+  file:// through java native IO;
+  s3a:// through parallel GET requests.
+* Arm64 binaries. Note: because the arm64 release was built on a different
+  platform, the jar files may not match those of the x86
+  release, and therefore the maven artifacts.
+* Security fixes in Hadoop's own code.
+
+Users of Apache Hadoop ${previous.ver} and earlier should upgrade to
+this release.
+
+All users are encouraged to read the [overview of major changes][1]
+since release ${previous.ver}.
+
+For details of bug fixes, improvements, and other enhancements since
+the previous ${previous.ver} release, please check [release notes][2]
+and [changelog][3].
+
+
+Azure ABFS: Critical Stream Prefetch Fix
+----------------------------------------
+
+The ABFS connector has a critical bug fix
+https://issues.apache.org/jira/browse/HADOOP-18546:
+*ABFS. Disable purging list of in-progress reads in abfs stream close().*
+
+All users of the abfs connector in hadoop releases 3.3.2+ MUST either upgrade
+to this release or disable prefetching by setting
+`fs.azure.readaheadqueue.depth` to `0`.
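+
+For example, as a `core-site.xml` entry:
+
+```xml
+<!-- disable ABFS readahead prefetching (HADOOP-18546 mitigation) -->
+<property>
+  <name>fs.azure.readaheadqueue.depth</name>
+  <value>0</value>
+</property>
+```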
+
+
+[1]: http://hadoop.apache.org/docs/r${ver}/index.html
+[2]: http://hadoop.apache.org/docs/r${ver}/hadoop-project-dist/hadoop-common/release/${ver}/RELEASENOTES.${ver}.html
+[3]: http://hadoop.apache.org/docs/r${ver}/hadoop-project-dist/hadoop-common/release/${ver}/CHANGELOG.${ver}.html
diff --git a/src/text/user-email-announcement.txt b/src/text/user-email-announcement.txt
index 535779c..745af87 100644
--- a/src/text/user-email-announcement.txt
+++ b/src/text/user-email-announcement.txt
@@ -3,7 +3,7 @@
 On behalf of the Apache Hadoop Project Management Committee, I am
 pleased to announce the release of Apache Hadoop ${ver}.
 
-${announcement.txt}
+${core-announcement.txt}
 
 Many thanks to everyone who helped in this release by supplying patches,
 reviewing them, helping get this release building and testing and

