This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/hadoop-release-support.git

commit b906d3bbba3f7017c7f6f97c5300f6d872341ac1
Author: Steve Loughran <ste...@cloudera.com>
AuthorDate: Wed Dec 21 19:00:07 2022 +0000

    HADOOP-18470.  rc work
---
 .gitignore                       |   2 +
 README.md                        |  82 ++++++++++++++++++---
 build.xml                        | 152 +++++++++++++++++++++++++++++++++------
 src/text/{email.txt => vote.txt} |  22 +++---
 4 files changed, 212 insertions(+), 46 deletions(-)

diff --git a/.gitignore b/.gitignore
index c0af19a..09891a9 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,2 +1,4 @@
 /target/
 /build.properties
+/.idea/
+/downloads/
diff --git a/README.md b/README.md
index a3f8b6d..fbb96f7 100644
--- a/README.md
+++ b/README.md
@@ -58,7 +58,7 @@ hboss.dir=/Users/stevel/Projects/hbasework/hbase-filesystem
 cloudstore.dir=/Users/stevel/Projects/cloudstore
 fs-api-shim.dir=/Users/stevel/Projects/Formats/fs-api-shim/
 hadoop.version=3.3.5
-git.commit.id=99c5802280b
+git.commit.id=3262495904d
 rc=0
 ```
 
@@ -83,12 +83,41 @@ This will take a while! look in target/incoming for progress
 ant scp-artifacts
 ```
 
-### Move to the release dir
+### Copy to the release dir
 
+Copies the files from `downloads/incoming/artifacts` to `downloads/hadoop-$version-$rc`.
+
+```bash
+ant copy-scp-artifacts release.dir.check
+```
+
+
+
+### Arm64 binaries
+
+If arm64 binaries are being created then they must be
+built on an arm docker image.
+Do not do this at the same time as building the x86 binaries,
+because both builds will generate staging repositories on
+nexus. Instead, run the arm build first and drop its artifacts
+from nexus before doing the x86 one. This ensures that
+the JAR files created by the x86 build are the ones
+published on maven.
+
+The arm process is:
+1. Create the full set of artifacts on an arm machine (macbook, cloud vm, ...)
+2. Drop the mvn repository from nexus
+3. Use the ant build to copy and rename the .tar.gz with the native binaries only
+4. Copy and rename the .asc file
+5. Generate a new sha512 checksum file containing the new name.
+6. Move these files into the downloads/release dir
+
+To perform stages 3-6:
 ```bash
-ant move-scp-artifacts release.dir.check
+ant arm.copy.artifacts arm.release
 ```
 
+
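+Roughly, a sketch of what these two targets do under the covers; the version,
+RC directory name and arm build output path below are illustrative, not fixed:
+
+```bash
+# stages 3-5, performed in downloads/arm; the ant target writes the checksum
+# in "SHA512 (file) = hash" form, shasum output is equivalent for manual work
+ARM_ARTIFACTS=/path/to/arm-build/target/artifacts   # illustrative location
+cd downloads/arm
+cp "$ARM_ARTIFACTS/hadoop-3.3.5.tar.gz"      hadoop-arm64-3.3.5.tar.gz
+cp "$ARM_ARTIFACTS/hadoop-3.3.5.tar.gz.asc"  hadoop-arm64-3.3.5.tar.gz.asc
+shasum -a 512 hadoop-arm64-3.3.5.tar.gz > hadoop-arm64-3.3.5.tar.gz.sha512
+# stage 6: copy into the release dir
+cp hadoop-arm64-3.3.5.tar.gz* ../hadoop-3.3.5-RC1/
+```
+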
 ### verify gpg signing
 
 ```bash
@@ -97,9 +126,18 @@ ant gpg.keys gpg.verify
 
 ### copy to a staging location in the hadoop SVN repository.
 
-When committed to svn it will be uploaded and accessible via an
+When committed to subversion it will be uploaded and accessible via a
 https://svn.apache.org URL.
 
+
+*do this after preparing the arm64 binaries*
+
+Final review of the release files:
+```bash
+ant release.dir.check
+```
+
+Now stage the release:
 ```bash
 ant stage
 ```
@@ -114,12 +152,22 @@ directly.
 
 This is not part of the tool. Can take a while...exit any VPN for extra speed.
 
+#### Manual
 ```bash
+cd $staging-dir
 svn update
 svn add <RC directory name>
-svn commit 
+svn commit
 ```
 
+#### Ant
+
+```bash
+ant stage-to-svn
+```
+
+
+
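+The ant target runs the same svn commands in the staging checkout, committing
+with a message built from `jira.id` and `git.commit.id`; roughly (RC directory
+name illustrative):
+
+```bash
+svn update
+svn add hadoop-3.3.5-RC1
+svn commit -m "HADOOP-18470. Hadoop hadoop-3.3.5-RC1 built from 3262495904d"
+```
+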
 ### tag the rc and push to github
 
 This isn't automated as it needs to be done in the source tree.
@@ -128,7 +176,7 @@ This isn't automated as it needs to be done in the source tree.
 ant print-tag-command
 ```
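+
+The printed commands are of the form (tag name and commit illustrative):
+
+```bash
+git tag -s release-hadoop-3.3.5-RC1 -m "Release candidate hadoop-3.3.5-RC1" 3262495904d
+git tag -v release-hadoop-3.3.5-RC1
+git push apache release-hadoop-3.3.5-RC1
+```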
 
-### Prepare for the email
+### Prepare the maven repository
 
 1. Go to https://repository.apache.org/#stagingRepositories
 2. Find the hadoop repo for the RC
@@ -136,14 +184,14 @@ ant print-tag-command
 
 ### Generate the RC vote email
 
-Review/update template message in `src/email.txt`.
+Review/update template message in `src/text/vote.txt`.
 All ant properties referenced will be expanded if set.
 
 ```bash
 ant vote-message
 ```
 
-The message is printed and saved to the file `target/email.txt`
+The message is printed and saved to the file `target/vote.txt`
 
 *do not send it until you have validated the URLs resolve*
 
@@ -212,6 +260,13 @@ First, purge your maven repo
 ant purge-from-maven
 ```
 
+## client validator maven
+
+
+```bash
+ant mvn-test
+```
+
 ## Cloudstore
 
 [cloudstore](https://github.com/steveloughran/cloudstore).
@@ -274,8 +329,9 @@ ant hboss.build
 
 ## building the Hadoop site
 
-Set `hadoop.site.dir` to be the path to where the git
-clone of the ASF site repo is
+Set `hadoop.site.dir` to be the path of the
+local clone of the ASF site repository
+https://gitbox.apache.org/repos/asf/hadoop-site.git
 
 ```properties
 hadoop.site.dir=/Users/stevel/hadoop/release/hadoop-site
@@ -312,6 +368,12 @@ ln -s r3.3.5 stable3
 ls -l
 ```
 
+note: there are a lot of files, and if your shell has a prompt which shows the git repo state, scanning can take a long time.
+Disable it, such as for fish:
+```fish
+set -e __fish_git_prompt_showdirtystate
+```
+
 Finally, *commit*
 
 ## Adding a global staging profile `asf-staging`
diff --git a/build.xml b/build.xml
index d6cd783..d956d2d 100644
--- a/build.xml
+++ b/build.xml
@@ -56,6 +56,12 @@
   <property name="previous.ver" value="3.3.4"/>
   <property name="release.branch" value="3.3"/>
 
+
+  <property name="git.commit.id" value="3262495904d"/>
+  <property name="jira.id" value="HADOOP-18470"/>
+
+
+
   <!-- for spark builds -->
   <property name="spark.version" value="3.4.0-SNAPSHOT"/>
   <!--  spark excludes hadoop-aws dependency and forces in their own
@@ -66,9 +72,9 @@
 
 
   <property name="release" value="hadoop-${hadoop.version}"/>
-  <property name="rc-dirname" value="${release}-${rc}"/>
-  <property name="release.dir" location="${downloads.dir}/${rc-dirname}"/>
-  <property name="staged.artifacts.dir" location="${staging.dir}/${rc.name}"/>
+  <property name="rc.dirname" value="${release}-${rc}"/>
+  <property name="release.dir" location="${downloads.dir}/${rc.dirname}"/>
+  <property name="staged.artifacts.dir" 
location="${staging.dir}/${rc.dirname}"/>
 
   <property name="tag.name" value="release-${rc.name}"/>
 <!--  <property name="nexus.staging.url"
@@ -79,6 +85,14 @@
   <property name="site.dir" 
location="${release.untar.dir}/site/r${hadoop.version}"/>
   <property name="release.bin.dir" location="${release.untar.dir}/bin"/>
   <property name="release.native.binaries" value="true"/>
+  <property name="arm.artifact.dir" 
location="${arm.hadoop.dir}/target/artifacts/" />
+  <property name="arm.dir" location="${downloads.dir}/arm" />
+  <property name="arm.binary.src" 
location="${arm.artifact.dir}/hadoop-${hadoop.version}.tar.gz" />
+  <property name="arm.binary" 
location="${arm.dir}/hadoop-arm64-${hadoop.version}.tar.gz" />
+  <property name="arm.binary.sha512" location="${arm.binary}.sha512" />
+  <property name="arm.binary.asc" location="${arm.binary}.asc" />
+  <property name="staging.commit.msg" value="${jira.id}. Hadoop ${rc.name} 
built from ${git.commit.id}" />
+
 
 
   <target name="init">
@@ -95,6 +109,15 @@
       <x executable="gpg"/>
     </presetdef>
 
+    <presetdef name="gpgv">
+      <gpg dir="${release.dir}">
+      </gpg>
+    </presetdef>
+
+    <presetdef name="svn">
+      <x executable="svn"/>
+    </presetdef>
+
 
     <macrodef name="require-dir">
       <attribute name="dir" />
@@ -133,6 +156,7 @@
     <echo>
       hadoop.version=${hadoop.version}
       rc=${rc}
+      jira.id=${jira.id}
       git.commit.id=${git.commit.id}
 
       Fetching and validating artifacts in ${release.dir}
@@ -218,13 +242,13 @@
   </target>
 
 
-  <target name="move-scp-artifacts" depends="init"
-    description="move the downloaded artifacts">
+  <target name="copy-scp-artifacts" depends="init"
+    description="copy the downloaded artifacts">
     <delete dir="${release.dir}"/>
-    <move
-      file="${incoming.dir}/artifacts"
-      tofile="${release.dir}"/>
-    <echo>Moved scp downloaded artifacts to ${release.dir}</echo>
+    <copy todir="${release.dir}">
+      <fileset dir="${incoming.dir}/artifacts" includes="*" />
+    </copy>
+    <echo>Copied scp downloaded artifacts to ${release.dir}</echo>
   </target>
 
   <target name="release.dir.check" depends="init">
@@ -249,10 +273,7 @@
 
   <target name="gpg.verify" depends="release.dir.check"
     description="verify the downloaded artifacts">
-    <presetdef name="gpgv">
-      <gpg dir="${release.dir}">
-      </gpg>
-    </presetdef>
+
 
     <gpgv>
       <arg value="--verify"/>
@@ -290,7 +311,7 @@
 
     <fail message="unset: staging.dir" unless="staging.dir"/>
 
-    <echo>copying to ${staging.dir}</echo>
+    <echo>moving ${release.dir} to ${staging.dir}</echo>
     <move
       file="${release.dir}"
       todir="${staging.dir}"/>
@@ -298,10 +319,34 @@
       <arg value="-l"/>
       <arg value="${staging.dir}"/>
     </x>
+    <x executable="ls">
+      <arg value="-l"/>
+      <arg value="${staged.artifacts.dir}"/>
+    </x>
     <echo>
-      Now go to the staging dir and add/commit
-
+      Now go to the staging dir ${staging.dir} and add/commit the files.
     </echo>
+  </target>
+
+  <target name="stage-to-svn"
+    description="stage the RC into svn"
+    depends="init">
+    <fail unless="jira.id"/>
+    <fail unless="git.commit.id"/>
+
+    <svn dir="${staging.dir}">
+      <arg value="update" />
+    </svn>
+    <svn dir="${staging.dir}">
+      <arg value="add" />
+      <arg value="${staged.artifacts.dir}" />
+    </svn>
+    <echo>Committing with message ${staging.commit.msg}. Please wait</echo>
+    <svn dir="${staging.dir}">
+      <arg value="commit" />
+      <arg value="-m" />
+      <arg value="${staging.commit.msg}" />
+    </svn>
 
   </target>
 
@@ -310,10 +355,20 @@
     depends="init">
     <require p="git.commit.id"/>
     <echo>
-      command to tag the commit is
-
+      # command to tag the commit
       git tag -s ${tag.name} -m "Release candidate ${rc.name}" ${git.commit.id}
+
+      # how to verify it
+      git tag -v ${tag.name}
+
+      # how to view the log to make sure it really is the right commit
+      git log tags/${tag.name}
+
+      # how to push to apache
       git push apache ${tag.name}
+
+      # if needed, how to delete it from apache
+      git push --delete apache ${tag.name}
     </echo>
   </target>
 
@@ -331,13 +386,13 @@
     </fail>
 
     <loadfile property="message.txt"
-      srcFile="src/text/email.txt">
+      srcFile="src/text/vote.txt">
       <filterchain>
         <expandproperties/>
       </filterchain>
     </loadfile>
     <property name="message.out"
-      location="${target}/email.txt"/>
+      location="${target}/vote.txt"/>
 
     <echo>${message.txt}</echo>
     <echo file="${message.out}">${message.txt}</echo>
@@ -768,9 +823,6 @@ Message is in file ${message.out}
      <fileset file="${site.dir}/hadoop-mapreduce-client/hadoop-mapreduce-client-core/jdiff/xml/Apache_Hadoop_MapReduce_Core_${ver}.xml"/>
      <fileset file="${site.dir}/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/jdiff/xml/Apache_Hadoop_MapReduce_JobClient_${ver}.xml"/>
     </copy>
-
-
-
   </target>
 
   <target name="release.site.prepare"
@@ -840,4 +892,58 @@ Message is in file ${message.out}
   </target>
 
 
+
+  <!--  copy the arm binaries into downloads/arm with their final filenames -->
+  <target name="arm.copy.artifacts" depends="init"
+      description="copy the arm binary and .asc files">
+    <delete dir="${arm.dir}" />
+    <mkdir dir="${arm.dir}" />
+    <copy file="${arm.binary.src}" tofile="${arm.binary}" />
+    <copy file="${arm.binary.src}.asc" tofile="${arm.binary.asc}" />
+    <x executable="ls">
+      <arg value="-l"/>
+      <arg value="${arm.dir}"/>
+    </x>
+  </target>
+
+<!--
+shasum -a 512 hadoop-arm64-3.3.5.tar.gz > hadoop-arm64-3.3.5.tar.gz.sha512
+gpg - -detach-sign -a  hadoop-arm64-3.3.5.tar.gz
+
+-->
+  <target name="arm.sign.artifacts" depends="init" >
+    <delete file="${arm.binary.sha512}" />
+    <checksum
+      algorithm="SHA-512"
+      fileext=".sha512"
+      file="${arm.binary}"
+      pattern="SHA512 ({1}) = {0}"
+      forceoverwrite="true"
+      />
+    <loadfile srcfile="${arm.binary.sha512}" property="arm.sha"/>
+    <echo> contents of ${arm.binary.sha512}
+${arm.sha}
+    </echo>
+<!-- not used because the original .asc signature can be reused.
+    <gpg dir="${arm.dir}">
+      <arg value="&#45;&#45;detach-sign" />
+      <arg value="-a" />
+      <arg value="${arm.binary}" />
+    </gpg>
+    -->
+    <loadfile srcfile="${arm.binary.asc}" property="arm.asc"/>
+    <echo> contents of ${arm.binary.asc}
+${arm.asc}
+    </echo>
+
+  </target>
+
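+<!--
+Illustrative only: the generated checksum file should contain a single line of the form
+  SHA512 (hadoop-arm64-3.3.5.tar.gz) = <128 hex characters>
+whose digest should match the output of a manual
+  shasum -a 512 hadoop-arm64-3.3.5.tar.gz
+run in the downloads/arm directory.
+-->
+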
+  <target name="arm.release" depends="arm.sign.artifacts"
+    description="prepare the arm artifacts and copy into the release dir">
+    <copy todir="${release.dir}" overwrite="true">
+      <fileset dir="${arm.dir}" includes="hadoop-arm64-*" />
+    </copy>
+  </target>
+
+
 </project>
diff --git a/src/text/email.txt b/src/text/vote.txt
similarity index 65%
rename from src/text/email.txt
rename to src/text/vote.txt
index 94b53b8..3506877 100644
--- a/src/text/email.txt
+++ b/src/text/vote.txt
@@ -1,16 +1,14 @@
 [VOTE] Release Apache Hadoop ${hadoop.version}
-Smoke test release release Apache Hadoop ${hadoop.version}
 
-Mukund and I have put together a release candidate (${rc}) for Hadoop ${hadoop.version}.
+Apache Hadoop ${hadoop.version}
 
-This isn't quite ready for a vote as we know there are a couple of fixes to go in
-(one for abfs, one for hadoop-hdfs-nfs).
+Mukund and I have put together a release candidate (${rc}) for Hadoop ${hadoop.version}.
 
 What we would like is for anyone who can to verify the tarballs, especially
 anyone who can try the arm64 binaries as we want to include them too.
 
 The RC is available at:
-https://dist.apache.org/repos/dist/dev/hadoop/${rc-dirname}/
+https://dist.apache.org/repos/dist/dev/hadoop/${rc.dirname}/
 
 The git tag is ${tag.name}, commit ${git.commit.id}
 
@@ -20,27 +18,25 @@ ${nexus.staging.url}
 You can find my public key at:
 https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
 
-Mukund doesn't have one as gpg refuses to work on his laptop.
-
 Change log
-https://dist.apache.org/repos/dist/dev/hadoop/${rc-dirname}/CHANGELOG.md
+https://dist.apache.org/repos/dist/dev/hadoop/${rc.dirname}/CHANGELOG.md
 
 Release notes
-https://dist.apache.org/repos/dist/dev/hadoop/${rc-dirname}/RELEASENOTES.md
+https://dist.apache.org/repos/dist/dev/hadoop/${rc.dirname}/RELEASENOTES.md
 
 This is off branch-3.3 and is the first big release since 3.3.2.
 
-See the release notes for details.
-
-Key changes
+Key changes include
 
 * Big update of dependencies to try and keep those reports of
   transitive CVEs under control -both genuine and false positive.
+* HDFS RBF enhancements
+* Critical fix to ABFS input stream prefetching for correct reading.
 * Vectored IO API for all FSDataInputStream implementations, with
   high-performance versions for file:// and s3a:// filesystems.
   file:// through java native io
   s3a:// parallel GET requests.
-* This release includes arm64 binaries. Please can anyone with
+* This release includes Arm64 binaries. Please can anyone with
   compatible systems validate these.
 
 

