This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/spark-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 92606b2  Further expand and update the merge and commit process for committers
92606b2 is described below

commit 92606b2e7849b9d743ef2a8176438142420a83e5
Author: Sean Owen <sean.o...@databricks.com>
AuthorDate: Thu Jan 10 14:29:08 2019 -0600

    Further expand and update the merge and commit process for committers
    
    Following up on https://github.com/apache/spark-website/commit/eb0aa14df472cff092b35ea1b894a0d880185561#r31886611 with additional changes.
    
    Author: Sean Owen <sean.o...@databricks.com>
    
    Closes #166 from srowen/MoreCommitProcessUpdate.
---
 committers.md        | 67 ++++++++++++++++++++++++++++---------------------
 site/committers.html | 70 ++++++++++++++++++++++++++++++----------------------
 2 files changed, 80 insertions(+), 57 deletions(-)

diff --git a/committers.md b/committers.md
index 0eaad06..c3daf10 100644
--- a/committers.md
+++ b/committers.md
@@ -127,13 +127,41 @@ Git history for that code to see who reviewed patches before. You can do this us
 Changes pushed to the master branch on Apache cannot be removed; that is, we can't force-push to 
 it. So please don't add any test commits or anything like that, only real patches.
 
-All merges should be done using the 
-[dev/merge_spark_pr.py](https://github.com/apache/spark/blob/master/dev/merge_spark_pr.py) 
-script, which squashes the pull request's changes into one commit. To use this script, you 
+<h4>Setting up Remotes</h4>
+
+To use the `merge_spark_pr.py` script described below, you 
 will need to add a git remote called `apache` at `https://github.com/apache/spark`, 
-as well as one called "apache-github" at `git://github.com/apache/spark`. For the `apache` repo, 
-you can authenticate using your ASF username and password. Ask `d...@spark.apache.org` if you have trouble with 
-this or want help doing your first merge.
+as well as one called `apache-github` at `git://github.com/apache/spark`.
+
+You will likely also have a remote `origin` pointing to your fork of Spark, and
+`upstream` pointing to the `apache/spark` GitHub repo. 
+
+If correct, your `git remote -v` should look like:
+
+```
+apache https://github.com/apache/spark.git (fetch)
+apache https://github.com/apache/spark.git (push)
+apache-github  git://github.com/apache/spark (fetch)
+apache-github  git://github.com/apache/spark (push)
+origin https://github.com/[your username]/spark.git (fetch)
+origin https://github.com/[your username]/spark.git (push)
+upstream       https://github.com/apache/spark.git (fetch)
+upstream       https://github.com/apache/spark.git (push)
+```
+
+For the `apache` repo, you will need to set up command-line authentication to GitHub. This may
+include setting up an SSH key and/or personal access token. See:
+
+- https://help.github.com/articles/connecting-to-github-with-ssh/
+- https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/
+
+Ask `d...@spark.apache.org` if you have trouble with these steps, or want help doing your first merge.
+
+<h4>Merge Script</h4>
+
+All merges should be done using the 
+[dev/merge_spark_pr.py](https://github.com/apache/spark/blob/master/dev/merge_spark_pr.py),
+which squashes the pull request's changes into one commit.
 
 The script is fairly self explanatory and walks you through steps and options interactively.
 
@@ -144,29 +172,12 @@ Then, in a separate window, modify the code and push a commit. Run `git rebase -
 You can verify the result is one change with `git log`. Then resume the script in the other window.
 
 Also, please remember to set Assignee on JIRAs where applicable when they are resolved. The script 
-can't do this automatically.
-Once a PR is merged please leave a comment on the PR stating which branch(es) it has been merged with.
+can do this automatically in most cases. However where the contributor is not yet a part of the
+Contributors group for the Spark project in ASF JIRA, it won't work until they are added. Ask
+an admin to add the person to Contributors at 
+https://issues.apache.org/jira/plugins/servlet/project-config/SPARK/roles .
 
-<!--
-<h3>Minimize use of MINOR, BUILD, and HOTFIX with no JIRA</h3>
-
-From pwendell at https://www.mail-archive.com/dev@spark.apache.org/msg09565.html:
-It would be great if people could create JIRA's for any and all merged pull requests. The reason is 
-that when patches get reverted due to build breaks or other issues, it is very difficult to keep 
-track of what is going on if there is no JIRA. 
-Here is a list of 5 patches we had to revert recently that didn't include a JIRA:
-    Revert "[MINOR] [BUILD] Use custom temp directory during build."
-    Revert "[SQL] [TEST] [MINOR] Uses a temporary log4j.properties in HiveThriftServer2Test to ensure expected logging behavior"
-    Revert "[BUILD] Always run SQL tests in master build."
-    Revert "[MINOR] [CORE] Warn users who try to cache RDDs with dynamic allocation on."
-    Revert "[HOT FIX] [YARN] Check whether `/lib` exists before listing its files"
-
-The cost overhead of creating a JIRA relative to other aspects of development is very small. 
-If it's really a documentation change or something small, that's okay.
-
-But anything affecting the build, packaging, etc. These all need to have a JIRA to ensure that 
-follow-up can be well communicated to all Spark developers.
--->
+Once a PR is merged please leave a comment on the PR stating which branch(es) it has been merged with.
 
 <h3>Policy on Backporting Bug Fixes</h3>
 
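As an aside for readers of this patch: the remote setup the new text describes can be sketched from the command line roughly as follows. This is a minimal sketch, not part of the commit itself; it assumes you have already cloned your own fork (so `origin` and `upstream` exist), and uses the remote names and URLs from the patch above.

```shell
# Add the two remotes the merge script expects (sketch; assumes an
# existing clone of your fork of apache/spark).
git remote add apache https://github.com/apache/spark.git
git remote add apache-github git://github.com/apache/spark

# Confirm the new entries appear alongside origin/upstream.
git remote -v
```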
diff --git a/site/committers.html b/site/committers.html
index f419218..b3f50d3 100644
--- a/site/committers.html
+++ b/site/committers.html
@@ -532,13 +532,42 @@ Git history for that code to see who reviewed patches before. You can do this us
 <p>Changes pushed to the master branch on Apache cannot be removed; that is, we can&#8217;t force-push to 
 it. So please don&#8217;t add any test commits or anything like that, only real patches.</p>
 
-<p>All merges should be done using the 
-<a href="https://github.com/apache/spark/blob/master/dev/merge_spark_pr.py">dev/merge_spark_pr.py</a> 
-script, which squashes the pull request&#8217;s changes into one commit. To use this script, you 
+<h4>Setting up Remotes</h4>
+
+<p>To use the <code>merge_spark_pr.py</code> script described below, you 
 will need to add a git remote called <code>apache</code> at <code>https://github.com/apache/spark</code>, 
-as well as one called &#8220;apache-github&#8221; at <code>git://github.com/apache/spark</code>. For the <code>apache</code> repo, 
-you can authenticate using your ASF username and password. Ask <code>d...@spark.apache.org</code> if you have trouble with 
-this or want help doing your first merge.</p>
+as well as one called <code>apache-github</code> at <code>git://github.com/apache/spark</code>.</p>
+
+<p>You will likely also have a remote <code>origin</code> pointing to your fork of Spark, and
+<code>upstream</code> pointing to the <code>apache/spark</code> GitHub repo.</p>
+
+<p>If correct, your <code>git remote -v</code> should look like:</p>
+
+<pre><code>apache      https://github.com/apache/spark.git (fetch)
+apache https://github.com/apache/spark.git (push)
+apache-github  git://github.com/apache/spark (fetch)
+apache-github  git://github.com/apache/spark (push)
+origin https://github.com/[your username]/spark.git (fetch)
+origin https://github.com/[your username]/spark.git (push)
+upstream       https://github.com/apache/spark.git (fetch)
+upstream       https://github.com/apache/spark.git (push)
+</code></pre>
+
+<p>For the <code>apache</code> repo, you will need to set up command-line authentication to GitHub. This may
+include setting up an SSH key and/or personal access token. See:</p>
+
+<ul>
+  <li>https://help.github.com/articles/connecting-to-github-with-ssh/</li>
+  <li>https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/</li>
+</ul>
+
+<p>Ask <code>d...@spark.apache.org</code> if you have trouble with these steps, or want help doing your first merge.</p>
+
+<h4>Merge Script</h4>
+
+<p>All merges should be done using the 
+<a href="https://github.com/apache/spark/blob/master/dev/merge_spark_pr.py">dev/merge_spark_pr.py</a>,
+which squashes the pull request&#8217;s changes into one commit.</p>
 
 <p>The script is fairly self explanatory and walks you through steps and options interactively.</p>
 
@@ -549,29 +578,12 @@ Then, in a separate window, modify the code and push a commit. Run <code>git reb
 You can verify the result is one change with <code>git log</code>. Then resume the script in the other window.</p>
 
 <p>Also, please remember to set Assignee on JIRAs where applicable when they are resolved. The script 
-can&#8217;t do this automatically.
-Once a PR is merged please leave a comment on the PR stating which branch(es) it has been merged with.</p>
-
-<!--
-<h3>Minimize use of MINOR, BUILD, and HOTFIX with no JIRA</h3>
-
-From pwendell at https://www.mail-archive.com/dev@spark.apache.org/msg09565.html:
-It would be great if people could create JIRA's for any and all merged pull requests. The reason is 
-that when patches get reverted due to build breaks or other issues, it is very difficult to keep 
-track of what is going on if there is no JIRA. 
-Here is a list of 5 patches we had to revert recently that didn't include a JIRA:
-    Revert "[MINOR] [BUILD] Use custom temp directory during build."
-    Revert "[SQL] [TEST] [MINOR] Uses a temporary log4j.properties in HiveThriftServer2Test to ensure expected logging behavior"
-    Revert "[BUILD] Always run SQL tests in master build."
-    Revert "[MINOR] [CORE] Warn users who try to cache RDDs with dynamic allocation on."
-    Revert "[HOT FIX] [YARN] Check whether `/lib` exists before listing its files"
-
-The cost overhead of creating a JIRA relative to other aspects of development is very small. 
-If it's really a documentation change or something small, that's okay.
-
-But anything affecting the build, packaging, etc. These all need to have a JIRA to ensure that 
-follow-up can be well communicated to all Spark developers.
--->
+can do this automatically in most cases. However where the contributor is not yet a part of the
+Contributors group for the Spark project in ASF JIRA, it won&#8217;t work until they are added. Ask
+an admin to add the person to Contributors at 
+https://issues.apache.org/jira/plugins/servlet/project-config/SPARK/roles .</p>
+
+<p>Once a PR is merged please leave a comment on the PR stating which branch(es) it has been merged with.</p>
 
 <h3>Policy on Backporting Bug Fixes</h3>
 


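For reference, the squash-before-resuming step that the patched section describes (modify the code, commit, run `git rebase -i`, verify with `git log`) can be sketched as below. This is an illustrative sketch only: the commit message and the use of `GIT_SEQUENCE_EDITOR` to automate the interactive rebase are assumptions for demonstration; in practice you would edit the rebase todo list by hand and mark the fix-up commit as `fixup`.

```shell
# Commit the follow-up fix requested during review (message is illustrative).
git commit -am "address review comments"

# Squash it into the previous commit. GIT_SEQUENCE_EDITOR non-interactively
# marks the second todo entry as 'fixup' (normally you would do this by hand).
GIT_SEQUENCE_EDITOR="sed -i '2s/^pick/fixup/'" git rebase -i HEAD~2

# Verify the result is one combined change before resuming the merge script.
git log --oneline -n 2
```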