Repository: spark
Updated Branches:
  refs/heads/master c42557be3 -> 40e080a68


Removed reference to incubation in Spark user docs.

Author: Reynold Xin <[email protected]>

Closes #2 from rxin/docs and squashes the following commits:

08bbd5f [Reynold Xin] Removed reference to incubation in Spark user docs.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/40e080a6
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/40e080a6
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/40e080a6

Branch: refs/heads/master
Commit: 40e080a68a8fd025435e9ff84fa9280b4aba4dcf
Parents: c42557b
Author: Reynold Xin <[email protected]>
Authored: Thu Feb 27 21:13:22 2014 -0800
Committer: Patrick Wendell <[email protected]>
Committed: Thu Feb 27 21:13:22 2014 -0800

----------------------------------------------------------------------
 docs/README.md                  |  2 +-
 docs/_config.yml                |  4 ++--
 docs/_layouts/global.html       | 10 ----------
 docs/bagel-programming-guide.md |  2 +-
 docs/index.md                   | 12 ++++++------
 docs/java-programming-guide.md  |  2 +-
 docs/scala-programming-guide.md |  2 +-
 docs/spark-debugger.md          |  4 ++--
 8 files changed, 14 insertions(+), 24 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/40e080a6/docs/README.md
----------------------------------------------------------------------
diff --git a/docs/README.md b/docs/README.md
index cc09d6e..cac65d9 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -1,6 +1,6 @@
 Welcome to the Spark documentation!
 
-This readme will walk you through navigating and building the Spark documentation, which is included here with the Spark source code. You can also find documentation specific to release versions of Spark at http://spark.incubator.apache.org/documentation.html.
+This readme will walk you through navigating and building the Spark documentation, which is included here with the Spark source code. You can also find documentation specific to release versions of Spark at http://spark.apache.org/documentation.html.
 
 Read on to learn more about viewing documentation in plain text (i.e., markdown) or building the documentation yourself. Why build it yourself? So that you have the docs that corresponds to whichever version of Spark you currently have checked out of revision control.
 

http://git-wip-us.apache.org/repos/asf/spark/blob/40e080a6/docs/_config.yml
----------------------------------------------------------------------
diff --git a/docs/_config.yml b/docs/_config.yml
index 9e5a95f..aa5a5ad 100644
--- a/docs/_config.yml
+++ b/docs/_config.yml
@@ -3,10 +3,10 @@ markdown: kramdown
 
 # These allow the documentation to be updated with nerw releases
 # of Spark, Scala, and Mesos.
-SPARK_VERSION: 1.0.0-incubating-SNAPSHOT
+SPARK_VERSION: 1.0.0-SNAPSHOT
 SPARK_VERSION_SHORT: 1.0.0
 SCALA_BINARY_VERSION: "2.10"
 SCALA_VERSION: "2.10.3"
 MESOS_VERSION: 0.13.0
 SPARK_ISSUE_TRACKER_URL: https://spark-project.atlassian.net
-SPARK_GITHUB_URL: https://github.com/apache/incubator-spark
+SPARK_GITHUB_URL: https://github.com/apache/spark

http://git-wip-us.apache.org/repos/asf/spark/blob/40e080a6/docs/_layouts/global.html
----------------------------------------------------------------------
diff --git a/docs/_layouts/global.html b/docs/_layouts/global.html
index 7114e1f..ebb58e8 100755
--- a/docs/_layouts/global.html
+++ b/docs/_layouts/global.html
@@ -159,16 +159,6 @@
 
             <hr>-->
 
-            <footer>
-              <hr>
-              <p style="text-align: center; veritcal-align: middle; color: #999;">
-                Apache Spark is an effort undergoing incubation at the Apache Software Foundation.
-                <a href="http://incubator.apache.org">
-                  <img style="margin-left: 20px;" src="img/incubator-logo.png" />
-                </a>
-              </p>
-            </footer>
-
         </div> <!-- /container -->
 
         <script src="js/vendor/jquery-1.8.0.min.js"></script>

http://git-wip-us.apache.org/repos/asf/spark/blob/40e080a6/docs/bagel-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/bagel-programming-guide.md b/docs/bagel-programming-guide.md
index b070d8e..da6d0c9 100644
--- a/docs/bagel-programming-guide.md
+++ b/docs/bagel-programming-guide.md
@@ -108,7 +108,7 @@ _Example_
 
 ## Operations
 
-Here are the actions and types in the Bagel API. See [Bagel.scala](https://github.com/apache/incubator-spark/blob/master/bagel/src/main/scala/org/apache/spark/bagel/Bagel.scala) for details.
+Here are the actions and types in the Bagel API. See [Bagel.scala](https://github.com/apache/spark/blob/master/bagel/src/main/scala/org/apache/spark/bagel/Bagel.scala) for details.
 
 ### Actions
 

http://git-wip-us.apache.org/repos/asf/spark/blob/40e080a6/docs/index.md
----------------------------------------------------------------------
diff --git a/docs/index.md b/docs/index.md
index aa9c866..4eb297d 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -9,7 +9,7 @@ It also supports a rich set of higher-level tools including [Shark](http://shark
 
 # Downloading
 
-Get Spark by visiting the [downloads page](http://spark.incubator.apache.org/downloads.html) of the Apache Spark site. This documentation is for Spark version {{site.SPARK_VERSION}}.
+Get Spark by visiting the [downloads page](http://spark.apache.org/downloads.html) of the Apache Spark site. This documentation is for Spark version {{site.SPARK_VERSION}}.
 
 Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS). All you need to run it is to have `java` to installed on your system `PATH`, or the `JAVA_HOME` environment variable pointing to a Java installation.
 
@@ -96,7 +96,7 @@ For this version of Spark (0.8.1) Hadoop 2.2.x (or newer) users will have to bui
 * [Amazon EC2](ec2-scripts.html): scripts that let you launch a cluster on EC2 in about 5 minutes
 * [Standalone Deploy Mode](spark-standalone.html): launch a standalone cluster quickly without a third-party cluster manager
 * [Mesos](running-on-mesos.html): deploy a private cluster using
-    [Apache Mesos](http://incubator.apache.org/mesos)
+    [Apache Mesos](http://mesos.apache.org)
 * [YARN](running-on-yarn.html): deploy Spark on top of Hadoop NextGen (YARN)
 
 **Other documents:**
@@ -110,20 +110,20 @@ For this version of Spark (0.8.1) Hadoop 2.2.x (or newer) users will have to bui
 
 **External resources:**
 
-* [Spark Homepage](http://spark.incubator.apache.org)
+* [Spark Homepage](http://spark.apache.org)
 * [Shark](http://shark.cs.berkeley.edu): Apache Hive over Spark
-* [Mailing Lists](http://spark.incubator.apache.org/mailing-lists.html): ask questions about Spark here
+* [Mailing Lists](http://spark.apache.org/mailing-lists.html): ask questions about Spark here
 * [AMP Camps](http://ampcamp.berkeley.edu/): a series of training camps at UC Berkeley that featured talks and
   exercises about Spark, Shark, Mesos, and more. [Videos](http://ampcamp.berkeley.edu/agenda-2012),
   [slides](http://ampcamp.berkeley.edu/agenda-2012) and [exercises](http://ampcamp.berkeley.edu/exercises-2012) are
   available online for free.
-* [Code Examples](http://spark.incubator.apache.org/examples.html): more are also available in the [examples subfolder](https://github.com/apache/incubator-spark/tree/master/examples/src/main/scala/) of Spark
+* [Code Examples](http://spark.apache.org/examples.html): more are also available in the [examples subfolder](https://github.com/apache/spark/tree/master/examples/src/main/scala/) of Spark
 * [Paper Describing Spark](http://www.cs.berkeley.edu/~matei/papers/2012/nsdi_spark.pdf)
 * [Paper Describing Spark Streaming](http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-259.pdf)
 
 # Community
 
-To get help using Spark or keep up with Spark development, sign up for the [user mailing list](http://spark.incubator.apache.org/mailing-lists.html).
+To get help using Spark or keep up with Spark development, sign up for the [user mailing list](http://spark.apache.org/mailing-lists.html).
 
 If you're in the San Francisco Bay Area, there's a regular [Spark meetup](http://www.meetup.com/spark-users/) every few weeks. Come by to meet the developers and other users.
 

http://git-wip-us.apache.org/repos/asf/spark/blob/40e080a6/docs/java-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/java-programming-guide.md b/docs/java-programming-guide.md
index 07732fa..5c73dbb 100644
--- a/docs/java-programming-guide.md
+++ b/docs/java-programming-guide.md
@@ -189,7 +189,7 @@ We hope to generate documentation with Java-style syntax in the future.
 # Where to Go from Here
 
 Spark includes several sample programs using the Java API in
-[`examples/src/main/java`](https://github.com/apache/incubator-spark/tree/master/examples/src/main/java/org/apache/spark/examples).  You can run them by passing the class name to the
+[`examples/src/main/java`](https://github.com/apache/spark/tree/master/examples/src/main/java/org/apache/spark/examples).  You can run them by passing the class name to the
 `bin/run-example` script included in Spark; for example:
 
     ./bin/run-example org.apache.spark.examples.JavaWordCount

http://git-wip-us.apache.org/repos/asf/spark/blob/40e080a6/docs/scala-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/scala-programming-guide.md b/docs/scala-programming-guide.md
index 506d3fa..9941273 100644
--- a/docs/scala-programming-guide.md
+++ b/docs/scala-programming-guide.md
@@ -365,7 +365,7 @@ res2: Int = 10
 
 # Where to Go from Here
 
-You can see some [example Spark programs](http://spark.incubator.apache.org/examples.html) on the Spark website.
+You can see some [example Spark programs](http://spark.apache.org/examples.html) on the Spark website.
 In addition, Spark includes several samples in `examples/src/main/scala`. Some of them have both Spark versions and local (non-parallel) versions, allowing you to see what had to be changed to make the program run on a cluster. You can run them using by passing the class name to the `bin/run-example` script included in Spark; for example:
 
     ./bin/run-example org.apache.spark.examples.SparkPi

http://git-wip-us.apache.org/repos/asf/spark/blob/40e080a6/docs/spark-debugger.md
----------------------------------------------------------------------
diff --git a/docs/spark-debugger.md b/docs/spark-debugger.md
index 11c51d5..891c2bf 100644
--- a/docs/spark-debugger.md
+++ b/docs/spark-debugger.md
@@ -2,7 +2,7 @@
 layout: global
 title: The Spark Debugger
 ---
-**Summary:** The Spark debugger provides replay debugging for deterministic (logic) errors in Spark programs. It's currently in development, but you can try it out in the [arthur branch](https://github.com/apache/incubator-spark/tree/arthur).
+**Summary:** The Spark debugger provides replay debugging for deterministic (logic) errors in Spark programs. It's currently in development, but you can try it out in the [arthur branch](https://github.com/apache/spark/tree/arthur).
 
 ## Introduction
 
@@ -19,7 +19,7 @@ For deterministic errors, debugging a Spark program is now as easy as debugging
 
 ## Approach
 
-As your Spark program runs, the slaves report key events back to the master -- for example, RDD creations, RDD contents, and uncaught exceptions. (A full list of event types is in [EventLogging.scala](https://github.com/apache/incubator-spark/blob/arthur/core/src/main/scala/spark/EventLogging.scala).) The master logs those events, and you can load the event log into the debugger after your program is done running.
+As your Spark program runs, the slaves report key events back to the master -- for example, RDD creations, RDD contents, and uncaught exceptions. (A full list of event types is in [EventLogging.scala](https://github.com/apache/spark/blob/arthur/core/src/main/scala/spark/EventLogging.scala).) The master logs those events, and you can load the event log into the debugger after your program is done running.
 
 _A note on nondeterminism:_ For fault recovery, Spark requires RDD transformations (for example, the function passed to `RDD.map`) to be deterministic. The Spark debugger also relies on this property, and it can also warn you if your transformation is nondeterministic. This works by checksumming the contents of each RDD and comparing the checksums from the original execution to the checksums after recomputing the RDD in the debugger.
 
