nsivabalan commented on code in PR #14296:
URL: https://github.com/apache/hudi/pull/14296#discussion_r2538477632


##########
release/release_guide.md:
##########
@@ -408,23 +408,43 @@ Set up a few environment variables to simplify Maven 
commands that follow. This
    1. This will deploy jar artifacts to the Apache Nexus Repository, which is 
the staging area for deploying jars to Maven Central.
    2. Review all staged artifacts (https://repository.apache.org/). They 
should contain all relevant parts for each module, including pom.xml, jar, test 
jar, source, test source, javadoc, etc. Carefully review any new artifacts.
    3. git checkout ${RELEASE_BRANCH}
-   4. ./scripts/release/deploy_staging_jars.sh 2>&1 | tee -a 
"/tmp/${RELEASE_VERSION}-${RC_NUM}.deploy.log"
-      1. when prompted for the passphrase, if you have multiple gpg keys in 
your keyring, make sure that you enter the right passphase corresponding to the 
same key (FINGERPRINT) as used while generating source release in step f.ii.
-         > If the prompt is not for the same key (by default the 
maven-gpg-plugin will pick up the first key in your keyring so that could be 
different), then add the following option to your ~/.gnupg/gpg.conf file
-      2. make sure your IP is not changing while uploading, otherwise it 
creates a different staging repo
-      3. Use a VPN if you can't prevent your IP from switching
-      4. after uploading, inspect the log to make sure all maven tasks said 
"BUILD SUCCESS"
-      5. In case you faced any issue while building `hudi-platform-service` or 
`hudi-metaserver-server` module, please ensure that you have docker daemon 
running. This is required to build `hudi-metaserver-server` module. See 
[checklist](#checklist-to-proceed-to-the-next-step). 
-   5. Review all staged artifacts by logging into Apache Nexus and clicking on 
"Staging Repositories" link on left pane. Then find a "open" entry for 
apachehudi
-   6. Ensure it contains all 2 (2.12 and 2.13) artifacts, mainly 
hudi-spark-bundle-2.12/2.13, hudi-spark3-bundle-2.12/2.13, 
hudi-spark-2.12/2.13, hudi-spark3-2.12/2.13, hudi-utilities-bundle_2.12/2.13 
and hudi-utilities_2.12/2.13.
+   4. Given that certain bundle jars are built with Java 11 (Flink 2.0 bundle) and Java 17 (Spark 4 bundle), multiple
+      scripts need to be run:
+       1. For most modules, built with Java 8, run `export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)` and
+          `./scripts/release/deploy_staging_jars.sh 2>&1 | tee -a "/tmp/${RELEASE_VERSION}-${RC_NUM}.deploy1.log"`
+           1. When prompted for the passphrase, if you have multiple gpg keys in your keyring, make sure that you
+              enter the right passphrase corresponding to the same key (FINGERPRINT) used while generating the source
+              release in step 6.2.
+          > If the prompt is not for the same key (by default the maven-gpg-plugin picks up the first key in your
+          keyring, which may be a different one), then add the following option to your ~/.gnupg/gpg.conf file
+           2. Make sure your IP does not change while uploading; otherwise a separate staging repo is created
+           3. Use a VPN if you can't prevent your IP from switching
+           4. After uploading, inspect the log to make sure all Maven tasks said "BUILD SUCCESS"
+           5. In case you face any issue while building the `hudi-platform-service` or `hudi-metaserver-server`
+              module, please ensure that you have the docker daemon running. This is required to build the
+              `hudi-metaserver-server` module. See [checklist](#checklist-to-proceed-to-the-next-step).
+       2. Continue with the Java 11 build: run `export JAVA_HOME=$(/usr/libexec/java_home -v 11)` and
+          `./scripts/release/deploy_staging_jars_java11.sh 2>&1 | tee -a "/tmp/${RELEASE_VERSION}-${RC_NUM}.deploy2.log"`
+       3. Continue with the Java 17 build: run `export JAVA_HOME=$(/usr/libexec/java_home -v 17)` and
+          `./scripts/release/deploy_staging_jars_java17.sh 2>&1 | tee -a "/tmp/${RELEASE_VERSION}-${RC_NUM}.deploy3.log"`
+   5. Note that the artifacts from Java 11 and 17 builds are uploaded to 
separate staging repos. You need to manually

Review Comment:
   How do we manually upload to the staging repo? Can you add a command-line runbook, or a screenshot if it's via the UI?
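One possible command-line sketch for the manual move asked about here, assuming the Nexus 2 staging REST endpoints that repository.apache.org exposes. The repo IDs and the example artifact coordinates are placeholders, not values from the PR; shown as a dry run (commands are printed, not executed).

```shell
# Hypothetical runbook: pull an artifact out of the extra staging repo created
# by the Java 11/17 run and re-deploy it into the main (Java 8) staging repo.
NEXUS="https://repository.apache.org"
EXTRA_REPO="orgapachehudi-1235"   # placeholder: staging repo from the Java 11/17 run
MAIN_REPO="orgapachehudi-1234"    # placeholder: staging repo from the Java 8 run
COORDS="org/apache/hudi/hudi-flink2.0-bundle/${RELEASE_VERSION}"
JAR="hudi-flink2.0-bundle-${RELEASE_VERSION}.jar"

# 1. Download the artifact (repeat for the pom, sources, javadoc, and .asc
#    signature files). Drop the leading 'echo' to actually run it.
echo curl -fO "${NEXUS}/service/local/repositories/${EXTRA_REPO}/content/${COORDS}/${JAR}"

# 2. Re-deploy into the main staging repo; 'apache.releases.https' must match a
#    <server> entry with your credentials in ~/.m2/settings.xml.
echo mvn deploy:deploy-file \
  -Durl="${NEXUS}/service/local/staging/deployByRepositoryId/${MAIN_REPO}" \
  -DrepositoryId=apache.releases.https \
  -Dfile="${JAR}" \
  -DgroupId=org.apache.hudi -DartifactId=hudi-flink2.0-bundle \
  -Dversion="${RELEASE_VERSION}" -Dpackaging=jar
```

Note that `deploy:deploy-file` does not carry signatures over, so the `.asc` files must be uploaded alongside (or `mvn gpg:sign-and-deploy-file` used instead).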



##########
release/release_guide.md:
##########
@@ -408,23 +408,43 @@ Set up a few environment variables to simplify Maven 
commands that follow. This
    1. This will deploy jar artifacts to the Apache Nexus Repository, which is 
the staging area for deploying jars to Maven Central.
    2. Review all staged artifacts (https://repository.apache.org/). They 
should contain all relevant parts for each module, including pom.xml, jar, test 
jar, source, test source, javadoc, etc. Carefully review any new artifacts.
    3. git checkout ${RELEASE_BRANCH}
-   4. ./scripts/release/deploy_staging_jars.sh 2>&1 | tee -a 
"/tmp/${RELEASE_VERSION}-${RC_NUM}.deploy.log"
-      1. when prompted for the passphrase, if you have multiple gpg keys in 
your keyring, make sure that you enter the right passphase corresponding to the 
same key (FINGERPRINT) as used while generating source release in step f.ii.
-         > If the prompt is not for the same key (by default the 
maven-gpg-plugin will pick up the first key in your keyring so that could be 
different), then add the following option to your ~/.gnupg/gpg.conf file
-      2. make sure your IP is not changing while uploading, otherwise it 
creates a different staging repo
-      3. Use a VPN if you can't prevent your IP from switching
-      4. after uploading, inspect the log to make sure all maven tasks said 
"BUILD SUCCESS"
-      5. In case you faced any issue while building `hudi-platform-service` or 
`hudi-metaserver-server` module, please ensure that you have docker daemon 
running. This is required to build `hudi-metaserver-server` module. See 
[checklist](#checklist-to-proceed-to-the-next-step). 
-   5. Review all staged artifacts by logging into Apache Nexus and clicking on 
"Staging Repositories" link on left pane. Then find a "open" entry for 
apachehudi
-   6. Ensure it contains all 2 (2.12 and 2.13) artifacts, mainly 
hudi-spark-bundle-2.12/2.13, hudi-spark3-bundle-2.12/2.13, 
hudi-spark-2.12/2.13, hudi-spark3-2.12/2.13, hudi-utilities-bundle_2.12/2.13 
and hudi-utilities_2.12/2.13.
+   4. Given that certain bundle jars are built with Java 11 (Flink 2.0 bundle) and Java 17 (Spark 4 bundle), multiple
+      scripts need to be run:
+       1. For most modules, built with Java 8, run `export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)` and
+          `./scripts/release/deploy_staging_jars.sh 2>&1 | tee -a "/tmp/${RELEASE_VERSION}-${RC_NUM}.deploy1.log"`
+           1. When prompted for the passphrase, if you have multiple gpg keys in your keyring, make sure that you
+              enter the right passphrase corresponding to the same key (FINGERPRINT) used while generating the source
+              release in step 6.2.
+          > If the prompt is not for the same key (by default the maven-gpg-plugin picks up the first key in your
+          keyring, which may be a different one), then add the following option to your ~/.gnupg/gpg.conf file
+           2. Make sure your IP does not change while uploading; otherwise a separate staging repo is created
+           3. Use a VPN if you can't prevent your IP from switching
+           4. After uploading, inspect the log to make sure all Maven tasks said "BUILD SUCCESS"
+           5. In case you face any issue while building the `hudi-platform-service` or `hudi-metaserver-server`
+              module, please ensure that you have the docker daemon running. This is required to build the
+              `hudi-metaserver-server` module. See [checklist](#checklist-to-proceed-to-the-next-step).
+       2. Continue with the Java 11 build: run `export JAVA_HOME=$(/usr/libexec/java_home -v 11)` and
+          `./scripts/release/deploy_staging_jars_java11.sh 2>&1 | tee -a "/tmp/${RELEASE_VERSION}-${RC_NUM}.deploy2.log"`
+       3. Continue with the Java 17 build: run `export JAVA_HOME=$(/usr/libexec/java_home -v 17)` and
+          `./scripts/release/deploy_staging_jars_java17.sh 2>&1 | tee -a "/tmp/${RELEASE_VERSION}-${RC_NUM}.deploy3.log"`
+   5. Note that the artifacts from Java 11 and 17 builds are uploaded to 
separate staging repos. You need to manually
+      download those artifacts and upload them to the first staging repo so 
that all artifacts stay in the same repo.
+   6. Review all staged artifacts by logging into Apache Nexus and clicking the "Staging Repositories" link in the
+      left pane. Then find an "open" entry for apachehudi
+   7. Ensure it contains artifacts for both Scala versions (2.12 and 2.13), mainly hudi-spark-bundle-2.12/2.13,

Review Comment:
   We should fix this to call out all Spark versions, right?
   Let's also call out the utilities slim bundle as well.
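A scripted spot-check of the staged repo could keep this list honest (the staging repo ID is a placeholder, and the bundle list below is illustrative — it should be extended with every Spark version bundle shipped in the release; it deliberately includes the slim bundle):

```shell
# Spot-check that the expected bundle artifacts exist in the staged repo.
REPO_URL="https://repository.apache.org/content/repositories/orgapachehudi-1234/org/apache/hudi"
BUNDLES="hudi-spark3.5-bundle_2.12 hudi-spark3.5-bundle_2.13 \
         hudi-utilities-bundle_2.12 hudi-utilities-bundle_2.13 \
         hudi-utilities-slim-bundle_2.12 hudi-utilities-slim-bundle_2.13 \
         hudi-cli-bundle_2.13"
for b in $BUNDLES; do
  # Dry run: drop the 'echo' to actually probe each artifact directory.
  echo curl -sfI "${REPO_URL}/${b}/"
done
```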



##########
scripts/release/deploy_staging_jars.sh:
##########
@@ -46,27 +46,26 @@ declare -a ALL_VERSION_OPTS=(
 # hudi-utilities-bundle_2.13
 # hudi-utilities-slim-bundle_2.13
 # hudi-cli-bundle_2.13
-"-Dscala-2.13 -Dspark3.5 -pl hudi-spark-datasource/hudi-spark-common,hudi-spark-datasource/hudi-spark3.5.x,hudi-spark-datasource/hudi-spark,hudi-utilities,packaging/hudi-spark-bundle,packaging/hudi-utilities-bundle,packaging/hudi-utilities-slim-bundle,packaging/hudi-cli-bundle -am"
+"-T 1C -Dscala-2.13 -Dspark3.5 -pl hudi-spark-datasource/hudi-spark-common,hudi-spark-datasource/hudi-spark3.5.x,hudi-spark-datasource/hudi-spark,hudi-utilities,packaging/hudi-spark-bundle,packaging/hudi-utilities-bundle,packaging/hudi-utilities-slim-bundle,packaging/hudi-cli-bundle -am"

Review Comment:
   Minor: I thought the default is `-T 1C`; why do we need it explicitly?
   Not a blocking comment, just curious.



##########
release/release_guide.md:
##########
@@ -408,23 +408,43 @@ Set up a few environment variables to simplify Maven 
commands that follow. This
    1. This will deploy jar artifacts to the Apache Nexus Repository, which is 
the staging area for deploying jars to Maven Central.
    2. Review all staged artifacts (https://repository.apache.org/). They 
should contain all relevant parts for each module, including pom.xml, jar, test 
jar, source, test source, javadoc, etc. Carefully review any new artifacts.
    3. git checkout ${RELEASE_BRANCH}
-   4. ./scripts/release/deploy_staging_jars.sh 2>&1 | tee -a 
"/tmp/${RELEASE_VERSION}-${RC_NUM}.deploy.log"
-      1. when prompted for the passphrase, if you have multiple gpg keys in 
your keyring, make sure that you enter the right passphase corresponding to the 
same key (FINGERPRINT) as used while generating source release in step f.ii.
-         > If the prompt is not for the same key (by default the 
maven-gpg-plugin will pick up the first key in your keyring so that could be 
different), then add the following option to your ~/.gnupg/gpg.conf file
-      2. make sure your IP is not changing while uploading, otherwise it 
creates a different staging repo
-      3. Use a VPN if you can't prevent your IP from switching
-      4. after uploading, inspect the log to make sure all maven tasks said 
"BUILD SUCCESS"
-      5. In case you faced any issue while building `hudi-platform-service` or 
`hudi-metaserver-server` module, please ensure that you have docker daemon 
running. This is required to build `hudi-metaserver-server` module. See 
[checklist](#checklist-to-proceed-to-the-next-step). 
-   5. Review all staged artifacts by logging into Apache Nexus and clicking on 
"Staging Repositories" link on left pane. Then find a "open" entry for 
apachehudi
-   6. Ensure it contains all 2 (2.12 and 2.13) artifacts, mainly 
hudi-spark-bundle-2.12/2.13, hudi-spark3-bundle-2.12/2.13, 
hudi-spark-2.12/2.13, hudi-spark3-2.12/2.13, hudi-utilities-bundle_2.12/2.13 
and hudi-utilities_2.12/2.13.
+   4. Given that certain bundle jars are built with Java 11 (Flink 2.0 bundle) and Java 17 (Spark 4 bundle), multiple
+      scripts need to be run:
+       1. For most modules, built with Java 8, run `export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)` and
+          `./scripts/release/deploy_staging_jars.sh 2>&1 | tee -a "/tmp/${RELEASE_VERSION}-${RC_NUM}.deploy1.log"`
+           1. When prompted for the passphrase, if you have multiple gpg keys in your keyring, make sure that you
+              enter the right passphrase corresponding to the same key (FINGERPRINT) used while generating the source
+              release in step 6.2.
+          > If the prompt is not for the same key (by default the maven-gpg-plugin picks up the first key in your
+          keyring, which may be a different one), then add the following option to your ~/.gnupg/gpg.conf file
+           2. Make sure your IP does not change while uploading; otherwise a separate staging repo is created
+           3. Use a VPN if you can't prevent your IP from switching
+           4. After uploading, inspect the log to make sure all Maven tasks said "BUILD SUCCESS"
+           5. In case you face any issue while building the `hudi-platform-service` or `hudi-metaserver-server`
+              module, please ensure that you have the docker daemon running. This is required to build the
+              `hudi-metaserver-server` module. See [checklist](#checklist-to-proceed-to-the-next-step).
+       2. Continue with the Java 11 build: run `export JAVA_HOME=$(/usr/libexec/java_home -v 11)` and
+          `./scripts/release/deploy_staging_jars_java11.sh 2>&1 | tee -a "/tmp/${RELEASE_VERSION}-${RC_NUM}.deploy2.log"`
+       3. Continue with the Java 17 build: run `export JAVA_HOME=$(/usr/libexec/java_home -v 17)` and
+          `./scripts/release/deploy_staging_jars_java17.sh 2>&1 | tee -a "/tmp/${RELEASE_VERSION}-${RC_NUM}.deploy3.log"`
+   5. Note that the artifacts from Java 11 and 17 builds are uploaded to 
separate staging repos. You need to manually
+      download those artifacts and upload them to the first staging repo so 
that all artifacts stay in the same repo.
+   6. Review all staged artifacts by logging into Apache Nexus and clicking the "Staging Repositories" link in the
+      left pane. Then find an "open" entry for apachehudi
+   7. Ensure it contains artifacts for both Scala versions (2.12 and 2.13), mainly hudi-spark-bundle-2.12/2.13,
+      hudi-spark3-bundle-2.12/2.13, hudi-spark-2.12/2.13, hudi-spark3-2.12/2.13, hudi-utilities-bundle_2.12/2.13 and
+      hudi-utilities_2.12/2.13.

Review Comment:
   L 437 needs to be revisited: we no longer support Scala 2.11 or Spark 2, nor Spark 3.0.3, etc.
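Separately, on the gpg prompt note quoted in the hunks above: the guide says to "add the following option to your ~/.gnupg/gpg.conf file" without showing it. Presumably this is gpg's `default-key` setting (the fingerprint below is a placeholder):

```
# ~/.gnupg/gpg.conf -- pin the key the maven-gpg-plugin should pick up
default-key 0123456789ABCDEF0123456789ABCDEF01234567
```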



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
