Author: matei
Date: Wed Jun 4 20:18:25 2014
New Revision: 1600486
URL: http://svn.apache.org/r1600486
Log:
website tweaks: release note links and scaling FAQ
Modified:
spark/faq.md
spark/releases/_posts/2014-05-30-spark-release-1-0-0.md
spark/site/faq.html
spark/site/releases/spark-release-1-0-0.html
Modified: spark/faq.md
URL: http://svn.apache.org/viewvc/spark/faq.md?rev=1600486&r1=1600485&r2=1600486&view=diff
==============================================================================
--- spark/faq.md (original)
+++ spark/faq.md Wed Jun 4 20:18:25 2014
@@ -22,8 +22,8 @@ streaming, interactive queries, and mach
<p class="question">Which languages does Spark support?</p>
<p class="answer">Spark supports Scala, Java and Python.</p>
-<p class="question">Does Spark require modified versions of Scala or
Python?</p>
-<p class="answer">No. Spark requires no changes to Scala or compiler plugins.
The Python API uses the standard CPython implementation, and can call into
existing C libraries for Python such as NumPy.</p>
+<p class="question">How large a cluster can Spark scale to?</p>
+<p class="answer">We are aware of multiple deployments on over 1000 nodes.</p>
<p class="question">What happens when a cached dataset does not fit in
memory?</p>
<p class="answer">Spark can either spill it to disk or recompute the
partitions that don't fit in RAM each time they are requested. By default, it
uses recomputation, but you can set a dataset's <a
href="{{site.url}}docs/latest/scala-programming-guide.html#rdd-persistence">storage
level</a> to <code>MEMORY_AND_DISK</code> to avoid this. </p>
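For reference, setting that storage level from Scala looks roughly like the following sketch (the input path and RDD name are illustrative, not from the FAQ):

    import org.apache.spark.SparkContext
    import org.apache.spark.storage.StorageLevel

    val sc = new SparkContext("local", "persistence-example")
    val lines = sc.textFile("data.txt") // illustrative input path
    // Spill partitions that don't fit in RAM to disk rather than
    // recomputing them each time they are requested.
    lines.persist(StorageLevel.MEMORY_AND_DISK)
    println(lines.count())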
@@ -39,6 +39,9 @@ streaming, interactive queries, and mach
<p class="question">How can I access data in S3?</p>
<p class="answer">Use the <code>s3n://</code> URI scheme
(<code>s3n://bucket/path</code>). You will also need to set your Amazon
security credentials, either by setting the environment variables
<code>AWS_ACCESS_KEY_ID</code> and <code>AWS_SECRET_ACCESS_KEY</code> before
your program runs, or by setting <code>fs.s3.awsAccessKeyId</code> and
<code>fs.s3.awsSecretAccessKey</code> in
<code>SparkContext.hadoopConfiguration</code>.</p>
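A rough Scala sketch of the configuration route described above (reading the keys from the environment here is just one option; the bucket and path are placeholders):

    import org.apache.spark.SparkContext

    val sc = new SparkContext("local", "s3-example")
    // Set the two Hadoop properties named in the answer; the values are
    // taken from the standard AWS environment variables in this sketch.
    sc.hadoopConfiguration.set("fs.s3.awsAccessKeyId", sys.env("AWS_ACCESS_KEY_ID"))
    sc.hadoopConfiguration.set("fs.s3.awsSecretAccessKey", sys.env("AWS_SECRET_ACCESS_KEY"))
    val data = sc.textFile("s3n://bucket/path") // s3n:// scheme as described
    println(data.count())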
+<p class="question">Does Spark require modified versions of Scala or
Python?</p>
+<p class="answer">No. Spark requires no changes to Scala or compiler plugins.
The Python API uses the standard CPython implementation, and can call into
existing C libraries for Python such as NumPy.</p>
+
<p class="question">What are good resources for learning Scala?</p>
<p class="answer">Check out <a
href="http://www.artima.com/scalazine/articles/steps.html">First Steps to
Scala</a> for a quick introduction, the <a
href="http://www.scala-lang.org/docu/files/ScalaTutorial.pdf">Scala tutorial
for Java programmers</a>, or the free online book <a
href="http://www.artima.com/pins1ed/">Programming in Scala</a>. Scala is easy
to transition to if you have Java experience or experience in a similarly
high-level language (e.g. Ruby).</p>
Modified: spark/releases/_posts/2014-05-30-spark-release-1-0-0.md
URL: http://svn.apache.org/viewvc/spark/releases/_posts/2014-05-30-spark-release-1-0-0.md?rev=1600486&r1=1600485&r2=1600486&view=diff
==============================================================================
--- spark/releases/_posts/2014-05-30-spark-release-1-0-0.md (original)
+++ spark/releases/_posts/2014-05-30-spark-release-1-0-0.md Wed Jun 4 20:18:25 2014
@@ -11,7 +11,7 @@ meta:
_wpas_done_all: '1'
---
-Spark 1.0.0 is a major release marking the start of the 1.X line. This release
brings both a variety of new features and strong API compatibility guarantees
throughout the 1.X line. Spark 1.0 adds a new major component, [Spark
SQL]({{site.url}}docs/1.0.0/sql-programming-guide.html), for loading and
manipulating structured data in Spark. It includes major extensions to all of
Spark's existing standard libraries
([ML]({{site.url}}docs/1.0.0/mllib-guide.html),
[Streaming]({{site.url}}docs/1.0.0/streaming-programming-guide.html), and
[GraphX]({{site.url}}docs/1.0.0/graphx-programming-guide.html)) while also
enhancing language support in Java and Python. Finally, Spark 1.0 brings
operational improvements including full support for the Hadoop/YARN security
model and a unified submission process for all supported cluster managers.
+Spark 1.0.0 is a major release marking the start of the 1.X line. This release
brings both a variety of new features and strong API compatibility guarantees
throughout the 1.X line. Spark 1.0 adds a new major component, [Spark
SQL]({{site.url}}docs/latest/sql-programming-guide.html), for loading and
manipulating structured data in Spark. It includes major extensions to all of
Spark's existing standard libraries
([ML]({{site.url}}docs/latest/mllib-guide.html),
[Streaming]({{site.url}}docs/latest/streaming-programming-guide.html), and
[GraphX]({{site.url}}docs/latest/graphx-programming-guide.html)) while also
enhancing language support in Java and Python. Finally, Spark 1.0 brings
operational improvements including full support for the Hadoop/YARN security
model and a unified submission process for all supported cluster managers.
You can download Spark 1.0.0 as either a
<a href="http://d3kbcqa49mib13.cloudfront.net/spark-1.0.0.tgz"
onClick="trackOutboundLink(this, 'Release Download Links',
'cloudfront_spark-1.0.0.tgz'); return false;">source package</a>
@@ -28,22 +28,22 @@ Spark 1.0.0 is the first release in the
For users running in secured Hadoop environments, Spark now integrates with
the Hadoop/YARN security model. Spark will authenticate job submission,
securely transfer HDFS credentials, and authenticate communication between
components.
### Operational and Packaging Improvements
-This release significantly simplifies the process of bundling and submitting a
Spark application. A new [spark-submit
tool]({{site.url}}docs/1.0.0/submitting-applications.html) allows users to
submit an application to any Spark cluster, including local clusters, Mesos, or
YARN, through a common process. The documentation for bundling Spark
applications has been substantially expanded. We've also added a history
server for Spark's web UI, allowing users to view Spark application data
after individual applications are finished.
+This release significantly simplifies the process of bundling and submitting a
Spark application. A new [spark-submit
tool]({{site.url}}docs/latest/submitting-applications.html) allows users to
submit an application to any Spark cluster, including local clusters, Mesos, or
YARN, through a common process. The documentation for bundling Spark
applications has been substantially expanded. We've also added a history
server for Spark's web UI, allowing users to view Spark application data
after individual applications are finished.
### Spark SQL
-This release introduces [Spark
SQL]({{site.url}}docs/1.0.0/sql-programming-guide.html) as a new alpha
component. Spark SQL provides support for loading and manipulating structured
data in Spark, either from external structured data sources (currently Hive and
Parquet) or by adding a schema to an existing RDD. Spark SQL's API
interoperates with the RDD data model, allowing users to interleave Spark code
with SQL statements. Under the hood, Spark SQL uses the Catalyst optimizer to
choose an efficient execution plan, and can automatically push predicates into
storage formats like Parquet. In future releases, Spark SQL will also provide a
common API to other storage systems.
+This release introduces [Spark
SQL]({{site.url}}docs/latest/sql-programming-guide.html) as a new alpha
component. Spark SQL provides support for loading and manipulating structured
data in Spark, either from external structured data sources (currently Hive and
Parquet) or by adding a schema to an existing RDD. Spark SQL's API
interoperates with the RDD data model, allowing users to interleave Spark code
with SQL statements. Under the hood, Spark SQL uses the Catalyst optimizer to
choose an efficient execution plan, and can automatically push predicates into
storage formats like Parquet. In future releases, Spark SQL will also provide a
common API to other storage systems.
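As a rough sketch of the workflow described above, in the style of the 1.0 SQL guide (the Person schema, data, and table name are illustrative):

    import org.apache.spark.SparkContext
    import org.apache.spark.sql.SQLContext

    case class Person(name: String, age: Int) // illustrative schema

    val sc = new SparkContext("local", "sql-example")
    val sqlContext = new SQLContext(sc)
    import sqlContext.createSchemaRDD // implicitly adds a schema to RDDs of case classes

    // Turn an existing RDD into a table, then interleave SQL with Spark code.
    val people = sc.parallelize(Seq(Person("Alice", 30), Person("Bob", 17)))
    people.registerAsTable("people")
    val adults = sqlContext.sql("SELECT name FROM people WHERE age >= 18")
    adults.collect().foreach(println)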
### MLlib Improvements
-In 1.0.0, Spark's MLlib adds support for sparse feature vectors in Scala,
Java, and Python. It takes advantage of sparsity in both storage and
computation in linear methods, k-means, and naive Bayes. In addition, this
release adds several new algorithms: scalable decision trees for both
classification and regression, distributed matrix algorithms including SVD and
PCA, model evaluation functions, and L-BFGS as an optimization primitive. The
programming guide and code examples for MLlib have also been greatly expanded.
+In 1.0.0, Spark's MLlib adds support for sparse feature vectors in Scala,
Java, and Python. It takes advantage of sparsity in both storage and
computation in linear methods, k-means, and naive Bayes. In addition, this
release adds several new algorithms: scalable decision trees for both
classification and regression, distributed matrix algorithms including SVD and
PCA, model evaluation functions, and L-BFGS as an optimization primitive. The
[MLlib programming guide]({{site.url}}docs/latest/mllib-guide.html) and code
examples have also been greatly expanded.
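A minimal Scala sketch of the new sparse vector support (the sizes and values are arbitrary):

    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.regression.LabeledPoint

    // A vector of size 5 with nonzeros at indices 0 and 3; only the
    // nonzero entries are stored, which the linear methods, k-means,
    // and naive Bayes exploit in both storage and computation.
    val sv = Vectors.sparse(5, Array(0, 3), Array(1.0, 4.0))
    val point = LabeledPoint(1.0, sv)
    println(point)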
### GraphX and Streaming Improvements
In addition to usability and maintainability improvements, GraphX in Spark 1.0
brings substantial performance boosts in graph loading, edge reversal, and
neighborhood computation. These operations now require less communication and
produce simpler RDD graphs. Spark's Streaming module has added performance
optimizations for stateful stream transformations, along with improved Flume
support, and automated state cleanup for long running jobs.
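For context, the stateful stream transformations mentioned here are operations like updateStateByKey; a minimal sketch, with an illustrative socket source and checkpoint path:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.StreamingContext._

    val conf = new SparkConf().setMaster("local[2]").setAppName("state-example")
    val ssc = new StreamingContext(conf, Seconds(1))
    ssc.checkpoint("checkpoint") // stateful transformations require a checkpoint dir

    // Running word counts across batches; state for long-running jobs
    // is now cleaned up automatically.
    val words = ssc.socketTextStream("localhost", 9999).flatMap(_.split(" "))
    val counts = words.map(w => (w, 1)).updateStateByKey[Int](
      (values: Seq[Int], state: Option[Int]) => Some(values.sum + state.getOrElse(0)))
    counts.print()
    ssc.start()
    ssc.awaitTermination()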
### Extended Java and Python Support
-Spark 1.0 adds support for Java 8 [new lambda
syntax](http://www.oracle.com/webfolder/technetwork/tutorials/obe/java/Lambda-QuickStart/index.html#section2)
in its Java bindings. Java 8 supports a concise syntax for writing anonymous
functions, similar to the closure syntax in Scala and Python. This change
requires small changes for users of the current Java API, which are noted in
the documentation. Spark's Python API has been extended to support several
new functions. We've also included several stability improvements in the
Python API, particularly for large datasets. PySpark now supports running on
YARN as well.
+Spark 1.0 adds support for Java 8 [new lambda
syntax](http://docs.oracle.com/javase/tutorial/java/javaOO/lambdaexpressions.html)
in its Java bindings. Java 8 supports a concise syntax for writing anonymous
functions, similar to the closure syntax in Scala and Python. This change
requires small changes for users of the current Java API, which are noted in
the documentation. Spark's Python API has been extended to support several
new functions. We've also included several stability improvements in the
Python API, particularly for large datasets. PySpark now supports running on
YARN as well.
### Documentation
-Spark's programming guide has been significantly expanded to centrally cover
all supported languages and discuss more operators and aspects of the
development life cycle. The MLlib guide has also been expanded with
significantly more detail and examples for each algorithm, while documents on
configuration, YARN and Mesos have also been revamped.
+Spark's [programming guide]({{site.url}}docs/latest/programming-guide.html)
has been significantly expanded to centrally cover all supported languages and
discuss more operators and aspects of the development life cycle. The [MLlib
guide]({{site.url}}docs/latest/mllib-guide.html) has also been expanded with
significantly more detail and examples for each algorithm, while documents on
configuration, YARN and Mesos have also been revamped.
### Smaller Changes
- PySpark now works with more Python versions than before -- Python 2.6+
instead of 2.7+, and NumPy 1.4+ instead of 1.7+.
@@ -52,12 +52,12 @@ Spark's programming guide has been
- Support for off-heap storage in Tachyon has been added via a special build
target.
- Datasets persisted with `DISK_ONLY` now write directly to disk,
significantly improving memory usage for large datasets.
- Intermediate state created during a Spark job is now garbage collected when
the corresponding RDDs become unreferenced, improving performance.
-- Spark now includes a [Javadoc
version]({{site.url}}docs/1.0.0/api/java/index.html) of all its API docs and a
[unified Scaladoc]({{site.url}}docs/1.0.0/api/scala/index.html) for all modules.
+- Spark now includes a [Javadoc
version]({{site.url}}docs/latest/api/java/index.html) of all its API docs and a
[unified Scaladoc]({{site.url}}docs/latest/api/scala/index.html) for all
modules.
- A new SparkContext.wholeTextFiles method lets you operate on small text
files as individual records.
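A short sketch of the new method (the directory is illustrative); each record is a (path, contents) pair, one per file:

    import org.apache.spark.SparkContext

    val sc = new SparkContext("local", "whole-text-files")
    val files = sc.wholeTextFiles("myDirectory") // illustrative directory of small files
    files.collect().foreach { case (path, contents) =>
      println(s"$path: ${contents.length} characters")
    }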
### Migrating to Spark 1.0
-While most of the Spark API remains the same as in 0.x versions, a few changes
have been made for long-term flexibility, especially in the Java API (to
support Java 8 lambdas). The documentation includes [migration
information]({{site.url}}docs/1.0.0/programming-guide.html#migrating-from-pre-10-versions-of-spark)
to upgrade your applications.
+While most of the Spark API remains the same as in 0.x versions, a few changes
have been made for long-term flexibility, especially in the Java API (to
support Java 8 lambdas). The documentation includes [migration
information]({{site.url}}docs/latest/programming-guide.html#migrating-from-pre-10-versions-of-spark)
to upgrade your applications.
### Contributors
The following developers contributed to this release:
Modified: spark/site/faq.html
URL: http://svn.apache.org/viewvc/spark/site/faq.html?rev=1600486&r1=1600485&r2=1600486&view=diff
==============================================================================
--- spark/site/faq.html (original)
+++ spark/site/faq.html Wed Jun 4 20:18:25 2014
@@ -173,8 +173,8 @@ streaming, interactive queries, and mach
<p class="question">Which languages does Spark support?</p>
<p class="answer">Spark supports Scala, Java and Python.</p>
-<p class="question">Does Spark require modified versions of Scala or
Python?</p>
-<p class="answer">No. Spark requires no changes to Scala or compiler plugins.
The Python API uses the standard CPython implementation, and can call into
existing C libraries for Python such as NumPy.</p>
+<p class="question">How large a cluster can Spark scale to?</p>
+<p class="answer">We are aware of multiple deployments on over 1000 nodes.</p>
<p class="question">What happens when a cached dataset does not fit in
memory?</p>
<p class="answer">Spark can either spill it to disk or recompute the
partitions that don't fit in RAM each time they are requested. By default, it
uses recomputation, but you can set a dataset's <a
href="/docs/latest/scala-programming-guide.html#rdd-persistence">storage
level</a> to <code>MEMORY_AND_DISK</code> to avoid this. </p>
@@ -190,6 +190,9 @@ streaming, interactive queries, and mach
<p class="question">How can I access data in S3?</p>
<p class="answer">Use the <code>s3n://</code> URI scheme
(<code>s3n://bucket/path</code>). You will also need to set your Amazon
security credentials, either by setting the environment variables
<code>AWS_ACCESS_KEY_ID</code> and <code>AWS_SECRET_ACCESS_KEY</code> before
your program runs, or by setting <code>fs.s3.awsAccessKeyId</code> and
<code>fs.s3.awsSecretAccessKey</code> in
<code>SparkContext.hadoopConfiguration</code>.</p>
+<p class="question">Does Spark require modified versions of Scala or
Python?</p>
+<p class="answer">No. Spark requires no changes to Scala or compiler plugins.
The Python API uses the standard CPython implementation, and can call into
existing C libraries for Python such as NumPy.</p>
+
<p class="question">What are good resources for learning Scala?</p>
<p class="answer">Check out <a
href="http://www.artima.com/scalazine/articles/steps.html">First Steps to
Scala</a> for a quick introduction, the <a
href="http://www.scala-lang.org/docu/files/ScalaTutorial.pdf">Scala tutorial
for Java programmers</a>, or the free online book <a
href="http://www.artima.com/pins1ed/">Programming in Scala</a>. Scala is easy
to transition to if you have Java experience or experience in a similarly
high-level language (e.g. Ruby).</p>
Modified: spark/site/releases/spark-release-1-0-0.html
URL: http://svn.apache.org/viewvc/spark/site/releases/spark-release-1-0-0.html?rev=1600486&r1=1600485&r2=1600486&view=diff
==============================================================================
--- spark/site/releases/spark-release-1-0-0.html (original)
+++ spark/site/releases/spark-release-1-0-0.html Wed Jun 4 20:18:25 2014
@@ -160,7 +160,7 @@
<h2>Spark Release 1.0.0</h2>
-<p>Spark 1.0.0 is a major release marking the start of the 1.X line. This
release brings both a variety of new features and strong API compatibility
guarantees throughout the 1.X line. Spark 1.0 adds a new major component, <a
href="/docs/1.0.0/sql-programming-guide.html">Spark SQL</a>, for loading and
manipulating structured data in Spark. It includes major extensions to all of
Spark's existing standard libraries (<a
href="/docs/1.0.0/mllib-guide.html">ML</a>, <a
href="/docs/1.0.0/streaming-programming-guide.html">Streaming</a>, and <a
href="/docs/1.0.0/graphx-programming-guide.html">GraphX</a>) while also
enhancing language support in Java and Python. Finally, Spark 1.0 brings
operational improvements including full support for the Hadoop/YARN security
model and a unified submission process for all supported cluster managers.</p>
+<p>Spark 1.0.0 is a major release marking the start of the 1.X line. This
release brings both a variety of new features and strong API compatibility
guarantees throughout the 1.X line. Spark 1.0 adds a new major component, <a
href="/docs/latest/sql-programming-guide.html">Spark SQL</a>, for loading and
manipulating structured data in Spark. It includes major extensions to all of
Spark's existing standard libraries (<a
href="/docs/latest/mllib-guide.html">ML</a>, <a
href="/docs/latest/streaming-programming-guide.html">Streaming</a>, and <a
href="/docs/latest/graphx-programming-guide.html">GraphX</a>) while also
enhancing language support in Java and Python. Finally, Spark 1.0 brings
operational improvements including full support for the Hadoop/YARN security
model and a unified submission process for all supported cluster managers.</p>
<p>You can download Spark 1.0.0 as either a
<a href="http://d3kbcqa49mib13.cloudfront.net/spark-1.0.0.tgz"
onclick="trackOutboundLink(this, 'Release Download Links',
'cloudfront_spark-1.0.0.tgz'); return false;">source package</a>
@@ -177,22 +177,22 @@
<p>For users running in secured Hadoop environments, Spark now integrates with
the Hadoop/YARN security model. Spark will authenticate job submission,
securely transfer HDFS credentials, and authenticate communication between
components.</p>
<h3 id="operational-and-packaging-improvements">Operational and Packaging
Improvements</h3>
-<p>This release significantly simplifies the process of bundling and
submitting a Spark application. A new <a
href="/docs/1.0.0/submitting-applications.html">spark-submit tool</a> allows
users to submit an application to any Spark cluster, including local clusters,
Mesos, or YARN, through a common process. The documentation for bundling Spark
applications has been substantially expanded. We've also added a history
server for Spark's web UI, allowing users to view Spark application data
after individual applications are finished.</p>
+<p>This release significantly simplifies the process of bundling and
submitting a Spark application. A new <a
href="/docs/latest/submitting-applications.html">spark-submit tool</a> allows
users to submit an application to any Spark cluster, including local clusters,
Mesos, or YARN, through a common process. The documentation for bundling Spark
applications has been substantially expanded. We've also added a history
server for Spark's web UI, allowing users to view Spark application data
after individual applications are finished.</p>
<h3 id="spark-sql">Spark SQL</h3>
-<p>This release introduces <a
href="/docs/1.0.0/sql-programming-guide.html">Spark SQL</a> as a new alpha
component. Spark SQL provides support for loading and manipulating structured
data in Spark, either from external structured data sources (currently Hive and
Parquet) or by adding a schema to an existing RDD. Spark SQL's API
interoperates with the RDD data model, allowing users to interleave Spark code
with SQL statements. Under the hood, Spark SQL uses the Catalyst optimizer to
choose an efficient execution plan, and can automatically push predicates into
storage formats like Parquet. In future releases, Spark SQL will also provide a
common API to other storage systems.</p>
+<p>This release introduces <a
href="/docs/latest/sql-programming-guide.html">Spark SQL</a> as a new alpha
component. Spark SQL provides support for loading and manipulating structured
data in Spark, either from external structured data sources (currently Hive and
Parquet) or by adding a schema to an existing RDD. Spark SQL's API
interoperates with the RDD data model, allowing users to interleave Spark code
with SQL statements. Under the hood, Spark SQL uses the Catalyst optimizer to
choose an efficient execution plan, and can automatically push predicates into
storage formats like Parquet. In future releases, Spark SQL will also provide a
common API to other storage systems.</p>
<h3 id="mllib-improvements">MLlib Improvements</h3>
-<p>In 1.0.0, Spark's MLlib adds support for sparse feature vectors in Scala,
Java, and Python. It takes advantage of sparsity in both storage and
computation in linear methods, k-means, and naive Bayes. In addition, this
release adds several new algorithms: scalable decision trees for both
classification and regression, distributed matrix algorithms including SVD and
PCA, model evaluation functions, and L-BFGS as an optimization primitive. The
programming guide and code examples for MLlib have also been greatly
expanded.</p>
+<p>In 1.0.0, Spark's MLlib adds support for sparse feature vectors in Scala,
Java, and Python. It takes advantage of sparsity in both storage and
computation in linear methods, k-means, and naive Bayes. In addition, this
release adds several new algorithms: scalable decision trees for both
classification and regression, distributed matrix algorithms including SVD and
PCA, model evaluation functions, and L-BFGS as an optimization primitive. The
<a href="/docs/latest/mllib-guide.html">MLlib programming guide</a> and code
examples have also been greatly expanded.</p>
<h3 id="graphx-and-streaming-improvements">GraphX and Streaming
Improvements</h3>
<p>In addition to usability and maintainability improvements, GraphX in Spark
1.0 brings substantial performance boosts in graph loading, edge reversal, and
neighborhood computation. These operations now require less communication and
produce simpler RDD graphs. Spark's Streaming module has added performance
optimizations for stateful stream transformations, along with improved Flume
support, and automated state cleanup for long running jobs.</p>
<h3 id="extended-java-and-python-support">Extended Java and Python Support</h3>
-<p>Spark 1.0 adds support for Java 8 <a
href="http://www.oracle.com/webfolder/technetwork/tutorials/obe/java/Lambda-QuickStart/index.html#section2">new
lambda syntax</a> in its Java bindings. Java 8 supports a concise syntax for
writing anonymous functions, similar to the closure syntax in Scala and Python.
This change requires small changes for users of the current Java API, which are
noted in the documentation. Spark's Python API has been extended to support
several new functions. We've also included several stability improvements in
the Python API, particularly for large datasets. PySpark now supports running
on YARN as well.</p>
+<p>Spark 1.0 adds support for Java 8 <a
href="http://docs.oracle.com/javase/tutorial/java/javaOO/lambdaexpressions.html">new
lambda syntax</a> in its Java bindings. Java 8 supports a concise syntax for
writing anonymous functions, similar to the closure syntax in Scala and Python.
This change requires small changes for users of the current Java API, which are
noted in the documentation. Spark's Python API has been extended to support
several new functions. We've also included several stability improvements in
the Python API, particularly for large datasets. PySpark now supports running
on YARN as well.</p>
<h3 id="documentation">Documentation</h3>
-<p>Spark's programming guide has been significantly expanded to centrally
cover all supported languages and discuss more operators and aspects of the
development life cycle. The MLlib guide has also been expanded with
significantly more detail and examples for each algorithm, while documents on
configuration, YARN and Mesos have also been revamped.</p>
+<p>Spark's <a href="/docs/latest/programming-guide.html">programming
guide</a> has been significantly expanded to centrally cover all supported
languages and discuss more operators and aspects of the development life cycle.
The <a href="/docs/latest/mllib-guide.html">MLlib guide</a> has also been
expanded with significantly more detail and examples for each algorithm, while
documents on configuration, YARN and Mesos have also been revamped.</p>
<h3 id="smaller-changes">Smaller Changes</h3>
<ul>
@@ -202,12 +202,12 @@
<li>Support for off-heap storage in Tachyon has been added via a special
build target.</li>
<li>Datasets persisted with <code>DISK_ONLY</code> now write directly to
disk, significantly improving memory usage for large datasets.</li>
<li>Intermediate state created during a Spark job is now garbage collected
when the corresponding RDDs become unreferenced, improving performance.</li>
- <li>Spark now includes a <a href="/docs/1.0.0/api/java/index.html">Javadoc
version</a> of all its API docs and a <a
href="/docs/1.0.0/api/scala/index.html">unified Scaladoc</a> for all
modules.</li>
+ <li>Spark now includes a <a href="/docs/latest/api/java/index.html">Javadoc
version</a> of all its API docs and a <a
href="/docs/latest/api/scala/index.html">unified Scaladoc</a> for all
modules.</li>
<li>A new SparkContext.wholeTextFiles method lets you operate on small text
files as individual records.</li>
</ul>
<h3 id="migrating-to-spark-10">Migrating to Spark 1.0</h3>
-<p>While most of the Spark API remains the same as in 0.x versions, a few
changes have been made for long-term flexibility, especially in the Java API
(to support Java 8 lambdas). The documentation includes <a
href="/docs/1.0.0/programming-guide.html#migrating-from-pre-10-versions-of-spark">migration
information</a> to upgrade your applications.</p>
+<p>While most of the Spark API remains the same as in 0.x versions, a few
changes have been made for long-term flexibility, especially in the Java API
(to support Java 8 lambdas). The documentation includes <a
href="/docs/latest/programming-guide.html#migrating-from-pre-10-versions-of-spark">migration
information</a> to upgrade your applications.</p>
<h3 id="contributors">Contributors</h3>
<p>The following developers contributed to this release:</p>