Repository: spark
Updated Branches:
  refs/heads/branch-1.6 e8ae242f9 -> f13a3d1f7


[SPARK-12760][DOCS] inaccurate description for difference between local vs cluster mode in closure handling

Clarify that modifying a driver local variable won't have the desired effect in 
cluster modes, and may or may not work as intended in local mode

Author: Sean Owen <so...@cloudera.com>

Closes #10866 from srowen/SPARK-12760.

(cherry picked from commit aca2a0165405b9eba27ac5e4739e36a618b96676)
Signed-off-by: Sean Owen <so...@cloudera.com>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/f13a3d1f
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/f13a3d1f
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/f13a3d1f

Branch: refs/heads/branch-1.6
Commit: f13a3d1f73d01bf167f3736b66222b1cb8f7a01b
Parents: e8ae242
Author: Sean Owen <so...@cloudera.com>
Authored: Sat Jan 23 11:45:12 2016 +0000
Committer: Sean Owen <so...@cloudera.com>
Committed: Sat Jan 23 11:45:21 2016 +0000

----------------------------------------------------------------------
 docs/programming-guide.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/f13a3d1f/docs/programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/programming-guide.md b/docs/programming-guide.md
index c49b232..6c538e8 100644
--- a/docs/programming-guide.md
+++ b/docs/programming-guide.md
@@ -755,7 +755,7 @@ One of the harder things about Spark is understanding the scope and life cycle o
 
 #### Example
 
-Consider the naive RDD element sum below, which behaves completely differently depending on whether execution is happening within the same JVM. A common example of this is when running Spark in `local` mode (`--master = local[n]`) versus deploying a Spark application to a cluster (e.g. via spark-submit to YARN):
+Consider the naive RDD element sum below, which may behave differently depending on whether execution is happening within the same JVM. A common example of this is when running Spark in `local` mode (`--master = local[n]`) versus deploying a Spark application to a cluster (e.g. via spark-submit to YARN):
 
 <div class="codetabs">
 
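For reference, the naive sum under discussion looks roughly like this in the guide's Scala tab (a minimal sketch; the guide shows equivalent Java and Python versions, and it assumes `data` is a numeric collection defined earlier and `sc` is a live SparkContext):

    var counter = 0
    var rdd = sc.parallelize(data)

    // Wrong: don't do this!! Each task gets its own copy of `counter`,
    // so in cluster mode the driver's variable is never updated.
    rdd.foreach(x => counter += x)

    println("Counter value: " + counter)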
@@ -803,11 +803,11 @@ print("Counter value: ", counter)
 
 #### Local vs. cluster modes
 
-The primary challenge is that the behavior of the above code is undefined. In local mode with a single JVM, the above code will sum the values within the RDD and store it in **counter**. This is because both the RDD and the variable **counter** are in the same memory space on the driver node.
+The behavior of the above code is undefined, and may not work as intended. To execute jobs, Spark breaks up the processing of RDD operations into tasks, each of which is executed by an executor. Prior to execution, Spark computes the task's **closure**. The closure is those variables and methods which must be visible for the executor to perform its computations on the RDD (in this case `foreach()`). This closure is serialized and sent to each executor.
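To make "computing the closure" concrete, here is a rough Scala sketch of the idea (illustrative only, not Spark's actual internals; the class name is hypothetical):

    // Conceptually, the function passed to foreach() becomes a serializable
    // object that carries its own copy of every variable it captures.
    class CapturedSumFn(var counter: Int) extends Serializable {
      def apply(x: Int): Unit = { counter += x }  // mutates this object's copy only
    }
    // One such object is serialized per task and deserialized on each executor,
    // so the executors never see the driver's original variable.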
 
-However, in `cluster` mode, what happens is more complicated, and the above may not work as intended. To execute jobs, Spark breaks up the processing of RDD operations into tasks - each of which is operated on by an executor. Prior to execution, Spark computes the **closure**. The closure is those variables and methods which must be visible for the executor to perform its computations on the RDD (in this case `foreach()`). This closure is serialized and sent to each executor. In `local` mode, there is only the one executors so everything shares the same closure. In other modes however, this is not the case and the executors running on separate worker nodes each have their own copy of the closure.
+The variables within the closure sent to each executor are now copies and thus, when **counter** is referenced within the `foreach` function, it's no longer the **counter** on the driver node. There is still a **counter** in the memory of the driver node but this is no longer visible to the executors! The executors only see the copy from the serialized closure. Thus, the final value of **counter** will still be zero since all operations on **counter** were referencing the value within the serialized closure.
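If the goal is simply the sum, a shared mutable variable is not needed at all; an aggregating action sidesteps the whole issue (a minimal sketch, reusing the `rdd` from the example above):

    // reduce() combines values on the executors and returns the result
    // to the driver, so nothing in a task mutates driver-side state.
    val sum = rdd.reduce(_ + _)
    println("Sum: " + sum)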
 
-What is happening here is that the variables within the closure sent to each executor are now copies and thus, when **counter** is referenced within the `foreach` function, it's no longer the **counter** on the driver node. There is still a **counter** in the memory of the driver node but this is no longer visible to the executors! The executors only see the copy from the serialized closure. Thus, the final value of **counter** will still be zero since all operations on **counter** were referencing the value within the serialized closure.
+In local mode, in some circumstances the `foreach` function will actually execute within the same JVM as the driver and will reference the same original **counter**, and may update it.
 
 To ensure well-defined behavior in these sorts of scenarios one should use an [`Accumulator`](#accumulators). Accumulators in Spark are used specifically to provide a mechanism for safely updating a variable when execution is split up across worker nodes in a cluster. The Accumulators section of this guide discusses these in more detail.
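A minimal sketch of that approach against the Spark 1.x accumulator API (again assuming `rdd` holds the integers being summed):

    // Updates made inside tasks are sent back and merged on the driver.
    val accum = sc.accumulator(0, "counter")
    rdd.foreach(x => accum += x)
    println("Counter value: " + accum.value)  // read the merged result on the driver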
 

