Repository: spark-website
Updated Branches:
  refs/heads/asf-site cea85ce67 -> eb97812f5


Updates to the release guide.

- Fix dangerous commands that would flood the ASF repo with requests.
- Add instructions for updating the release KEYS file.
- Fix the PyPI instructions (including a link to the relevant message on the private@ list).
- Add some more instructions for updating the web site.
- Remove some outdated instructions.
- Add SparkR instructions.

Author: Marcelo Vanzin <van...@cloudera.com>

Closes #116 from vanzin/rm-fixes.


Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/eb97812f
Tree: http://git-wip-us.apache.org/repos/asf/spark-website/tree/eb97812f
Diff: http://git-wip-us.apache.org/repos/asf/spark-website/diff/eb97812f

Branch: refs/heads/asf-site
Commit: eb97812f59eb86afdfec7fa6351d2ef2b8f935d5
Parents: cea85ce
Author: Marcelo Vanzin <van...@cloudera.com>
Authored: Tue Jun 12 09:40:17 2018 -0700
Committer: Marcelo Vanzin <van...@cloudera.com>
Committed: Tue Jun 12 09:40:17 2018 -0700

----------------------------------------------------------------------
 README.md                 |  6 ++-
 release-process.md        | 89 +++++++++++++++++++--------------------
 site/mailing-lists.html   |  2 +-
 site/release-process.html | 95 ++++++++++++++++++++----------------------
 4 files changed, 92 insertions(+), 100 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark-website/blob/eb97812f/README.md
----------------------------------------------------------------------
diff --git a/README.md b/README.md
index 667519b..209e8f8 100644
--- a/README.md
+++ b/README.md
@@ -17,6 +17,9 @@ In addition to generating the site as HTML from the markdown files, jekyll can s
 a web server. To build the site and run a web server use the command `jekyll serve` which runs
 the web server on port 4000, then visit the site at http://localhost:4000.
 
+Please make sure you always run `jekyll build` after testing your changes with `jekyll serve`,
+otherwise you will end up with broken links in a few places.
+
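In practice, the preview-then-publish loop looks like this (a minimal sketch; it assumes `jekyll` is on your PATH and is run from the repository root):

```
# Preview the site locally at http://localhost:4000, then stop the server
jekyll serve
# Regenerate the committed HTML so links point at the production URLs
jekyll build
```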
 ## Docs sub-dir
 
 The docs are not generated as part of the website. They are built separately for each release 
@@ -41,5 +44,4 @@ compile phase, use the following syntax:
 
 ## Merge PR
 
-To merge pull request, use the merge_pr.py script which also squash the commits.
-
+To merge a pull request, use the `merge_pr.py` script, which also squashes the commits.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/eb97812f/release-process.md
----------------------------------------------------------------------
diff --git a/release-process.md b/release-process.md
index f25a429..e756cc0 100644
--- a/release-process.md
+++ b/release-process.md
@@ -42,7 +42,7 @@ standard Git branching mechanism and should be announced to the community once t
 created.
 
 It is also good to set up Jenkins jobs for the release branch once it is cut to
-ensure tests are passing. These are jobs like 
+ensure tests are passing. These are jobs like
 
https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-branch-2.3-test-maven-hadoop-2.7/ .
 Consult Josh Rosen and Shane Knapp for help with this. Also remember to add the new jobs
 to the test dashboard at https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/ .
@@ -54,7 +54,7 @@ last RC are marked as `Resolved` and has a `Target Versions` set to this release
 
 
 To track any issue with a pending PR targeting this release, create a filter in JIRA with a query like this:
-`project = SPARK AND "Target Version/s" = "12340470" AND status in (OPEN, "In Progress")`
+`project = SPARK AND "Target Version/s" = "12340470" AND status in (Open, Reopened, "In Progress")`
 
 
 For the target version string value to use, find the numeric value that corresponds to the release by looking into
@@ -95,7 +95,7 @@ Instead much of the same release logic can be accessed in `dev/create-release/re
 
 ```
 # Move dev/ to release/ when the voting is completed. See Finalize the Release below
-svn co "https://dist.apache.org/repos/dist/dev/spark" svn-spark
+svn co --depth=files "https://dist.apache.org/repos/dist/dev/spark" svn-spark
 # edit svn-spark/KEYS file
 svn ci --username $ASF_USERNAME --password "$ASF_PASSWORD" -m"Update KEYS"
 ```
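The "edit svn-spark/KEYS file" step normally means appending your public code signing key; a minimal sketch, where `<KEY_ID>` is a placeholder for your key:

```
# Append the key's signature listing and ASCII-armored export to KEYS
(gpg --list-sigs "<KEY_ID>" && gpg --armor --export "<KEY_ID>") >> svn-spark/KEYS
```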
@@ -134,13 +134,15 @@ move the artifacts into the release folder, they cannot be removed.**
 After the vote passes, to upload the binaries to Apache mirrors, you move the binaries from the dev directory (this should be where they were voted on) to the release directory. This "move" is the only way you can add anything to the actual release directory.
 
 ```
-# Checkout the Spark directory in Apache distribution SVN "dev" repo
-$ svn co https://dist.apache.org/repos/dist/dev/spark/
-
 # Move the sub-directory in "dev" to the
 # corresponding directory in "release"
 $ export SVN_EDITOR=vim
 $ svn mv https://dist.apache.org/repos/dist/dev/spark/spark-1.1.1-rc2 https://dist.apache.org/repos/dist/release/spark/spark-1.1.1
+
+# If you've added your signing key to the KEYS file, also update the release copy.
+svn co --depth=files "https://dist.apache.org/repos/dist/release/spark" svn-spark
+curl "https://dist.apache.org/repos/dist/dev/spark/KEYS" > svn-spark/KEYS
+(cd svn-spark && svn ci --username $ASF_USERNAME --password "$ASF_PASSWORD" -m"Update KEYS")
 ```
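To sanity-check the move before the mirrors pick it up, a quick sketch:

```
# The new release directory should now be listed here...
svn ls https://dist.apache.org/repos/dist/release/spark/
# ...and the RC directory should be gone from dev/
svn ls https://dist.apache.org/repos/dist/dev/spark/
```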
 
 Verify that the resources are present in <a href="https://www.apache.org/dist/spark/">https://www.apache.org/dist/spark/</a>.
@@ -154,22 +156,28 @@ and the same under https://repository.apache.org/content/groups/maven-staging-gr
 
 <h4>Upload to PyPI</h4>
 
-Uploading to PyPI is done after the release has been uploaded to Apache. To get started, go to the <a href="https://pypi.python.org">PyPI website</a> and log in with the spark-upload account (see the PMC mailing list for account permissions).
+You'll need the credentials for the `spark-upload` account, which can be found in
+<a href="https://lists.apache.org/thread.html/2789e448cd8a95361a3164b48f3f8b73a6d9d82aeb228bae2bc4dc7f@%3Cprivate.spark.apache.org%3E">this message</a>
+(only visible to PMC members).
 
+The artifacts can be uploaded using <a href="https://pypi.python.org/pypi/twine">twine</a>. Just run:
 
-Once you have logged in it is time to register the new release, on the <a href="https://pypi.python.org/pypi?%3Aaction=submit_form">submitting package information</a> page by uploading the PKG-INFO file from inside the pyspark packaged artifact.
+```
+twine upload --repository-url https://upload.pypi.org/legacy/ pyspark-{version}.tar.gz pyspark-{version}.tar.gz.asc
+```
 
+Adjust the command for the files that match the new release. If for some reason the twine upload
+is incorrect (e.g. an HTTP failure or other issue), you can rename the artifact to
+`pyspark-version.post0.tar.gz`, delete the old artifact from PyPI and re-upload.
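As a sketch of that recovery path (the version number below is a placeholder):

```
# Hypothetical re-upload after a broken upload; 2.3.1 stands in for the real version
mv pyspark-2.3.1.tar.gz pyspark-2.3.1.post0.tar.gz
mv pyspark-2.3.1.tar.gz.asc pyspark-2.3.1.post0.tar.gz.asc
twine upload --repository-url https://upload.pypi.org/legacy/ \
  pyspark-2.3.1.post0.tar.gz pyspark-2.3.1.post0.tar.gz.asc
```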
 
-Once the release has been registered you can upload the artifacts
-to the <b>legacy</b> pypi interface, using <a href="https://pypi.python.org/pypi/twine">twine</a>.
-If you don't have twine setup you will need to create a .pypirc file with the reository pointing to `https://upload.pypi.org/legacy/` and the same username and password for the spark-upload account.
 
-In the release directory run `twine upload -r legacy pyspark-version.tar.gz pyspark-version.tar.gz.asc`.
-If for some reason the twine upload is incorrect (e.g. http failure or other issue), you can rename the artifact to `pyspark-version.post0.tar.gz`, delete the old artifact from PyPI and re-upload.
+<h4>Publish to CRAN</h4>
 
+Publishing to CRAN is done using <a href="https://cran.r-project.org/submit.html">this form</a>.
+Since it requires further manual steps, please also contact the <a href="mailto:priv...@spark.apache.org">PMC</a>.
 
 
-<h4>Remove Old Releases from Mirror Network</h4>
+<h4>Remove Old Releases from Development Repository and Mirror Network</h4>
 
 Spark always keeps two releases in the mirror network: the most recent release on the current and
 previous branches. To delete older versions simply use svn rm. The `downloads.js` file in the
@@ -180,43 +188,23 @@ releases should be 1.1.1 and 1.0.2, but not 1.1.1 and 1.1.0.
 $ svn rm https://dist.apache.org/repos/dist/release/spark/spark-1.1.0
 ```
 
-<h4>Update the Spark Apache Repository</h4>
-
-Check out the tagged commit for the release candidate that passed and apply the correct version tag.
+You should also delete the RC directories from the staging repository. For example:
 
 ```
-$ git checkout v1.1.1-rc2 # the RC that passed
-$ git tag v1.1.1
-$ git push apache v1.1.1
+svn rm https://dist.apache.org/repos/dist/dev/spark/v2.3.1-rc1-bin/ \
+  https://dist.apache.org/repos/dist/dev/spark/v2.3.1-rc1-docs/ \
+  -m"Removing RC artifacts."
 
-# Verify that the tag has been applied correctly
-# If so, remove the old tag
-$ git push apache :v1.1.1-rc2
-$ git tag -d v1.1.1-rc2
 ```
 
-Next, update remaining version numbers in the release branch. If you are doing a patch release,
-see the similar commit made after the previous release in that branch. For example, for branch 1.0,
-see <a href="https://github.com/apache/spark/commit/2a5514f7dcb9765b60cb772b97038cbbd1b58983">this example commit</a>.
-
-In general, the rules are as follows:
-
-- `grep` through the repository to find such occurrences
-- References to the version just released. Upgrade them to next release version. If it is not a
-documentation related version (e.g. inside `spark/docs/` or inside `spark/python/epydoc.conf`),
-add `-SNAPSHOT` to the end.
-- References to the next version. Ensure these already have `-SNAPSHOT`.
+<h4>Update the Spark Apache Repository</h4>
 
-<!--
-<h4>Update the EC2 Scripts</h4>
+Check out the tagged commit for the release candidate that passed and apply the correct version tag.
 
-Upload the binary packages to the S3 bucket s3n://spark-related-packages (ask pwendell to do this). Then, change the init scripts in mesos/spark-ec2 repository to pull new binaries (see this example commit).
-For Spark 1.1+, update branch v4+
-For Spark 1.1, update branch v3+
-For Spark 1.0, update branch v3+
-For Spark 0.9, update branch v2+
-You can audit the ec2 set-up by launching a cluster and running this audit script. Make sure you create cluster with default instance type (m1.xlarge).
--->
+```
+$ git tag v1.1.1 v1.1.1-rc2 # the RC that passed
+$ git push apache v1.1.1
+```
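Before announcing the release, it is worth double-checking the tag; a sketch, assuming the ASF remote is named `apache`:

```
# Both refs should resolve to the same commit
git rev-parse v1.1.1^{commit} v1.1.1-rc2^{commit}
# The new tag should also be visible on the remote
git ls-remote --tags apache | grep v1.1.1
```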
 
 <h4>Update the Spark Website</h4>
 
@@ -242,9 +230,12 @@ $ ln -s 1.1.1 latest
 ```
 
 Next, update the rest of the Spark website. See how the previous releases are documented
-(all the HTML file changes are generated by `jekyll`).
-In particular, update `documentation.md` to add link to `docs` for the previous release. Add
-the new release to `js/downloads.js`. Check `security.md` for anything to update.
+(all the HTML file changes are generated by `jekyll`). In particular:
+
+* update `_layouts/global.html` if the new release is the latest one
+* update `documentation.md` to add a link to the docs for the new release
+* add the new release to `js/downloads.js`
+* check `security.md` for anything to update
 
 ```
 $ git add 1.1.1
@@ -259,6 +250,10 @@ pick the release version from the list, then click on "Release Notes". Copy this
 
 Then run `jekyll build` to update the `site` directory.
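Putting those website steps together, the final change looks roughly like this (a sketch; file names follow the examples above):

```
# Regenerate the HTML into site/ and commit the sources plus generated files
jekyll build
git add site/
git commit -m "Updates for the 1.1.1 release."
```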
 
+After merging the change into the `asf-site` branch, you may need to create a follow-up empty
+commit to force synchronization between ASF's git and the web site, and also the GitHub mirror.
+For some reason, synchronization seems to be unreliable for this repository.
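A minimal sketch of that workaround (again assuming the ASF remote is named `apache`):

```
# An empty commit is enough to nudge the synchronization job
git commit --allow-empty -m "Force web site sync."
git push apache asf-site
```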
+
 On a related note, make sure the version is marked as released on JIRA. Go find the release page as above, e.g.
 `https://issues.apache.org/jira/projects/SPARK/versions/12340295`, and click the "Release" button on the right and enter the release date.
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/eb97812f/site/mailing-lists.html
----------------------------------------------------------------------
diff --git a/site/mailing-lists.html b/site/mailing-lists.html
index 5adb102..7a06e3b 100644
--- a/site/mailing-lists.html
+++ b/site/mailing-lists.html
@@ -12,7 +12,7 @@
 
   
     <meta http-equiv="refresh" content="0; url=/community.html">
-    <link rel="canonical" href="http://localhost:4000/community.html" />
+    <link rel="canonical" href="https://spark.apache.org/community.html" />
   
 
   

http://git-wip-us.apache.org/repos/asf/spark-website/blob/eb97812f/site/release-process.html
----------------------------------------------------------------------
diff --git a/site/release-process.html b/site/release-process.html
index 91acc20..a3b311d 100644
--- a/site/release-process.html
+++ b/site/release-process.html
@@ -248,7 +248,7 @@ standard Git branching mechanism and should be announced to the community once t
 created.</p>
 
 <p>It is also good to set up Jenkins jobs for the release branch once it is cut to
-ensure tests are passing. These are jobs like 
+ensure tests are passing. These are jobs like
 
https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-branch-2.3-test-maven-hadoop-2.7/ .
 Consult Josh Rosen and Shane Knapp for help with this. Also remember to add the new jobs
 to the test dashboard at https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/ .</p>
@@ -259,7 +259,7 @@ to the test dashboard at https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%
 last RC are marked as <code>Resolved</code> and have a <code>Target Versions</code> set to this release version.</p>
 
 <p>To track any issue with a pending PR targeting this release, create a filter in JIRA with a query like this:
-<code>project = SPARK AND "Target Version/s" = "12340470" AND status in (OPEN, "In Progress")</code></p>
+<code>project = SPARK AND "Target Version/s" = "12340470" AND status in (Open, Reopened, "In Progress")</code></p>
 
 <p>For the target version string value to use, find the numeric value that corresponds to the release by looking into
 an existing issue with that target version and click on the version (e.g. find an issue targeting 2.2.1
@@ -296,7 +296,7 @@ At present the Jenkins jobs <em>SHOULD NOT BE USED</em> as they use a legacy sha
 </ul>
 
 <pre><code># Move dev/ to release/ when the voting is completed. See Finalize the Release below
-svn co "https://dist.apache.org/repos/dist/dev/spark" svn-spark
+svn co --depth=files "https://dist.apache.org/repos/dist/dev/spark" svn-spark
 # edit svn-spark/KEYS file
 svn ci --username $ASF_USERNAME --password "$ASF_PASSWORD" -m"Update KEYS"
 </code></pre>
@@ -338,13 +338,15 @@ move the artifacts into the release folder, they cannot be removed.</strong></p>
 
 <p>After the vote passes, to upload the binaries to Apache mirrors, you move the binaries from the dev directory (this should be where they were voted on) to the release directory. This &#8220;move&#8221; is the only way you can add anything to the actual release directory.</p>
 
-<pre><code># Checkout the Spark directory in Apache distribution SVN "dev" repo
-$ svn co https://dist.apache.org/repos/dist/dev/spark/
-
-# Move the sub-directory in "dev" to the
+<pre><code># Move the sub-directory in "dev" to the
 # corresponding directory in "release"
 $ export SVN_EDITOR=vim
 $ svn mv https://dist.apache.org/repos/dist/dev/spark/spark-1.1.1-rc2 https://dist.apache.org/repos/dist/release/spark/spark-1.1.1
+
+# If you've added your signing key to the KEYS file, also update the release copy.
+svn co --depth=files "https://dist.apache.org/repos/dist/release/spark" svn-spark
+curl "https://dist.apache.org/repos/dist/dev/spark/KEYS" &gt; svn-spark/KEYS
+(cd svn-spark &amp;&amp; svn ci --username $ASF_USERNAME --password "$ASF_PASSWORD" -m"Update KEYS")
 </code></pre>
 
 <p>Verify that the resources are present in <a href="https://www.apache.org/dist/spark/">https://www.apache.org/dist/spark/</a>.
@@ -356,18 +358,25 @@ and the same under https://repository.apache.org/content/groups/maven-staging-gr
 
 <h4>Upload to PyPI</h4>
 
-<p>Uploading to PyPI is done after the release has been uploaded to Apache. To get started, go to the <a href="https://pypi.python.org">PyPI website</a> and log in with the spark-upload account (see the PMC mailing list for account permissions).</p>
+<p>You&#8217;ll need the credentials for the <code>spark-upload</code> account, which can be found in
+<a href="https://lists.apache.org/thread.html/2789e448cd8a95361a3164b48f3f8b73a6d9d82aeb228bae2bc4dc7f@%3Cprivate.spark.apache.org%3E">this message</a>
+(only visible to PMC members).</p>
+
+<p>The artifacts can be uploaded using <a href="https://pypi.python.org/pypi/twine">twine</a>. Just run:</p>
 
-<p>Once you have logged in it is time to register the new release, on the <a href="https://pypi.python.org/pypi?%3Aaction=submit_form">submitting package information</a> page by uploading the PKG-INFO file from inside the pyspark packaged artifact.</p>
+<pre><code>twine upload --repository-url https://upload.pypi.org/legacy/ pyspark-{version}.tar.gz pyspark-{version}.tar.gz.asc
+</code></pre>
+
+<p>Adjust the command for the files that match the new release. If for some reason the twine upload
+is incorrect (e.g. an HTTP failure or other issue), you can rename the artifact to
+<code>pyspark-version.post0.tar.gz</code>, delete the old artifact from PyPI and re-upload.</p>
 
-<p>Once the release has been registered you can upload the artifacts
-to the <b>legacy</b> pypi interface, using <a href="https://pypi.python.org/pypi/twine">twine</a>.
-If you don&#8217;t have twine setup you will need to create a .pypirc file with the reository pointing to <code>https://upload.pypi.org/legacy/</code> and the same username and password for the spark-upload account.</p>
+<h4>Publish to CRAN</h4>
 
-<p>In the release directory run <code>twine upload -r legacy pyspark-version.tar.gz pyspark-version.tar.gz.asc</code>.
-If for some reason the twine upload is incorrect (e.g. http failure or other issue), you can rename the artifact to <code>pyspark-version.post0.tar.gz</code>, delete the old artifact from PyPI and re-upload.</p>
+<p>Publishing to CRAN is done using <a href="https://cran.r-project.org/submit.html">this form</a>.
+Since it requires further manual steps, please also contact the <a href="mailto:priv...@spark.apache.org">PMC</a>.</p>
 
-<h4>Remove Old Releases from Mirror Network</h4>
+<h4>Remove Old Releases from Development Repository and Mirror Network</h4>
 
 <p>Spark always keeps two releases in the mirror network: the most recent release on the current and
 previous branches. To delete older versions simply use svn rm. The <code>downloads.js</code> file in the
@@ -377,44 +386,21 @@ releases should be 1.1.1 and 1.0.2, but not 1.1.1 and 1.1.0.</p>
 <pre><code>$ svn rm 
https://dist.apache.org/repos/dist/release/spark/spark-1.1.0
 </code></pre>
 
-<h4>Update the Spark Apache Repository</h4>
+<p>You should also delete the RC directories from the staging repository. For example:</p>
 
-<p>Check out the tagged commit for the release candidate that passed and apply the correct version tag.</p>
-
-<pre><code>$ git checkout v1.1.1-rc2 # the RC that passed
-$ git tag v1.1.1
-$ git push apache v1.1.1
+<pre><code>svn rm https://dist.apache.org/repos/dist/dev/spark/v2.3.1-rc1-bin/ \
+  https://dist.apache.org/repos/dist/dev/spark/v2.3.1-rc1-docs/ \
+  -m"Removing RC artifacts."
 
-# Verify that the tag has been applied correctly
-# If so, remove the old tag
-$ git push apache :v1.1.1-rc2
-$ git tag -d v1.1.1-rc2
 </code></pre>
 
-<p>Next, update remaining version numbers in the release branch. If you are doing a patch release,
-see the similar commit made after the previous release in that branch. For example, for branch 1.0,
-see <a href="https://github.com/apache/spark/commit/2a5514f7dcb9765b60cb772b97038cbbd1b58983">this example commit</a>.</p>
-
-<p>In general, the rules are as follows:</p>
-
-<ul>
-  <li><code>grep</code> through the repository to find such occurrences</li>
-  <li>References to the version just released. Upgrade them to next release version. If it is not a
-documentation related version (e.g. inside <code>spark/docs/</code> or inside <code>spark/python/epydoc.conf</code>),
-add <code>-SNAPSHOT</code> to the end.</li>
-  <li>References to the next version. Ensure these already have <code>-SNAPSHOT</code>.</li>
-</ul>
+<h4>Update the Spark Apache Repository</h4>
 
-<!--
-<h4>Update the EC2 Scripts</h4>
+<p>Check out the tagged commit for the release candidate that passed and apply the correct version tag.</p>
 
-Upload the binary packages to the S3 bucket s3n://spark-related-packages (ask pwendell to do this). Then, change the init scripts in mesos/spark-ec2 repository to pull new binaries (see this example commit).
-For Spark 1.1+, update branch v4+
-For Spark 1.1, update branch v3+
-For Spark 1.0, update branch v3+
-For Spark 0.9, update branch v2+
-You can audit the ec2 set-up by launching a cluster and running this audit script. Make sure you create cluster with default instance type (m1.xlarge).
--->
+<pre><code>$ git tag v1.1.1 v1.1.1-rc2 # the RC that passed
+$ git push apache v1.1.1
+</code></pre>
 
 <h4>Update the Spark Website</h4>
 
@@ -439,9 +425,14 @@ $ ln -s 1.1.1 latest
 </code></pre>
 
 <p>Next, update the rest of the Spark website. See how the previous releases are documented
-(all the HTML file changes are generated by <code>jekyll</code>).
-In particular, update <code>documentation.md</code> to add link to <code>docs</code> for the previous release. Add
-the new release to <code>js/downloads.js</code>. Check <code>security.md</code> for anything to update.</p>
+(all the HTML file changes are generated by <code>jekyll</code>). In particular:</p>
+
+<ul>
+  <li>update <code>_layouts/global.html</code> if the new release is the latest one</li>
+  <li>update <code>documentation.md</code> to add a link to the docs for the new release</li>
+  <li>add the new release to <code>js/downloads.js</code></li>
+  <li>check <code>security.md</code> for anything to update</li>
+</ul>
 
 <pre><code>$ git add 1.1.1
 $ git commit -m "Add docs for Spark 1.1.1"
@@ -455,6 +446,10 @@ pick the release version from the list, then click on &#8220;Release Notes&#8221
 
 <p>Then run <code>jekyll build</code> to update the <code>site</code> directory.</p>
 
+<p>After merging the change into the <code>asf-site</code> branch, you may need to create a follow-up empty
+commit to force synchronization between ASF&#8217;s git and the web site, and also the GitHub mirror.
+For some reason, synchronization seems to be unreliable for this repository.</p>
+
 <p>On a related note, make sure the version is marked as released on JIRA. Go find the release page as above, e.g.
 <code>https://issues.apache.org/jira/projects/SPARK/versions/12340295</code>,
 and click the &#8220;Release&#8221; button on the right and enter the release date.</p>
 

