Repository: spark
Updated Branches:
  refs/heads/branch-2.0 57dd4efcd -> a7e9e60df


[MINOR] Fix typos in documents

## What changes were proposed in this pull request?

I used spell-check tools to look for typos in the Spark documents and fixed the ones they found.

## How was this patch tested?

N/A

Author: WeichenXu <weichenxu...@outlook.com>

Closes #13538 from WeichenXu123/fix_doc_typo.

(cherry picked from commit 1e2c9311871968426e019164b129652fd6d0037f)
Signed-off-by: Sean Owen <so...@cloudera.com>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/a7e9e60d
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/a7e9e60d
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/a7e9e60d

Branch: refs/heads/branch-2.0
Commit: a7e9e60df5c10a90c06883ea3203ec895b9b1f82
Parents: 57dd4ef
Author: WeichenXu <weichenxu...@outlook.com>
Authored: Tue Jun 7 13:29:27 2016 +0100
Committer: Sean Owen <so...@cloudera.com>
Committed: Tue Jun 7 13:29:36 2016 +0100

----------------------------------------------------------------------
 docs/graphx-programming-guide.md    | 2 +-
 docs/hardware-provisioning.md       | 2 +-
 docs/streaming-programming-guide.md | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/a7e9e60d/docs/graphx-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/graphx-programming-guide.md b/docs/graphx-programming-guide.md
index 9dea9b5..81cf174 100644
--- a/docs/graphx-programming-guide.md
+++ b/docs/graphx-programming-guide.md
@@ -132,7 +132,7 @@ var graph: Graph[VertexProperty, String] = null
 
 Like RDDs, property graphs are immutable, distributed, and fault-tolerant.  Changes to the values or
 structure of the graph are accomplished by producing a new graph with the desired changes.  Note
-that substantial parts of the original graph (i.e., unaffected structure, attributes, and indicies)
+that substantial parts of the original graph (i.e., unaffected structure, attributes, and indices)
 are reused in the new graph reducing the cost of this inherently functional data structure.  The
 graph is partitioned across the executors using a range of vertex partitioning heuristics.  As with
 RDDs, each partition of the graph can be recreated on a different machine in the event of a failure.
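
The passage above describes GraphX's functional update model: transformations return a new graph and reuse the unaffected structure, attributes, and indices of the old one. A minimal Scala sketch of what that looks like in practice (assuming a live SparkContext named sc; the vertex and edge data here are made up purely for illustration):

    import org.apache.spark.graphx.{Edge, Graph}
    import org.apache.spark.rdd.RDD

    // Build a tiny property graph with String vertex and edge attributes.
    val vertices: RDD[(Long, String)] = sc.parallelize(Seq((1L, "alice"), (2L, "bob")))
    val edges: RDD[Edge[String]] = sc.parallelize(Seq(Edge(1L, 2L, "follows")))
    val graph: Graph[String, String] = Graph(vertices, edges)

    // mapVertices does not mutate `graph`; it returns a new Graph whose edge
    // structure and indices are shared with the original rather than copied.
    val upperCased: Graph[String, String] = graph.mapVertices((id, name) => name.toUpperCase)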

http://git-wip-us.apache.org/repos/asf/spark/blob/a7e9e60d/docs/hardware-provisioning.md
----------------------------------------------------------------------
diff --git a/docs/hardware-provisioning.md b/docs/hardware-provisioning.md
index 60ecb4f..bb6f616 100644
--- a/docs/hardware-provisioning.md
+++ b/docs/hardware-provisioning.md
@@ -22,7 +22,7 @@ Hadoop and Spark on a common cluster manager like [Mesos](running-on-mesos.html)
 
 * If this is not possible, run Spark on different nodes in the same local-area network as HDFS.
 
-* For low-latency data stores like HBase, it may be preferrable to run computing jobs on different
+* For low-latency data stores like HBase, it may be preferable to run computing jobs on different
 nodes than the storage system to avoid interference.
 
 # Local Disks

http://git-wip-us.apache.org/repos/asf/spark/blob/a7e9e60d/docs/streaming-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/streaming-programming-guide.md b/docs/streaming-programming-guide.md
index 78ae6a7..0a6a039 100644
--- a/docs/streaming-programming-guide.md
+++ b/docs/streaming-programming-guide.md
@@ -1259,7 +1259,7 @@ dstream.foreachRDD(sendRecord)
 </div>
 
 This is incorrect as this requires the connection object to be serialized and sent from the
-driver to the worker. Such connection objects are rarely transferrable across machines. This
+driver to the worker. Such connection objects are rarely transferable across machines. This
 error may manifest as serialization errors (connection object not serializable), initialization
 errors (connection object needs to be initialized at the workers), etc. The correct solution is
 to create the connection object at the worker.
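
For reference, "create the connection object at the worker" in this context usually means building the connection inside foreachPartition, so it is constructed on the worker instead of being shipped from the driver. A minimal Scala sketch, where createNewConnection() and send() are hypothetical placeholders for whatever client library is actually in use:

    dstream.foreachRDD { rdd =>
      rdd.foreachPartition { partitionOfRecords =>
        // The connection is created here, on the worker, so no
        // non-serializable object has to travel from the driver.
        val connection = createNewConnection()   // hypothetical factory
        partitionOfRecords.foreach(record => connection.send(record))
        connection.close()
      }
    }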


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
