In order to facilitate community testing of Spark 1.6.0, I'm excited to
announce the availability of an early preview of the release. This is not a
release candidate, so there is no voting involved. However, it would be
awesome if community members could start testing with this preview package
and report any problems they encounter.

This preview package contains all the commits to branch-1.6 up to commit

The staging maven repository for this preview build can be found here:

Binaries for this preview build can be found here:

A build of the docs can also be found here:

The full change log for this release can be found on JIRA.

*== How can you help? ==*

If you are a Spark user, you can help us test this release by taking an
existing Spark workload, running it on this preview release, and reporting
any problems you encounter.

*== Major Features ==*

When testing, we'd appreciate it if users could focus on areas that have
changed in this release.  Some notable new features include:

SPARK-11787 *Parquet Performance* - Improve Parquet scan performance when
using flat schemas.

SPARK-10810 *Session Management* - Multiple users of the thrift (JDBC/ODBC)
server now have isolated sessions, including their own default database
(i.e., USE mydb), even on shared clusters.

SPARK-9999 *Dataset API* - A new, experimental type-safe API (similar to
RDDs) that performs many operations directly on serialized binary data and
uses code generation (i.e., Project Tungsten).

SPARK-10000 *Unified Memory Management* - Shared memory for execution and
caching instead of exclusive division of the regions.

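To make the change concrete, here is a minimal pure-Scala sketch of the idea (not Spark's actual memory manager code); the heap size and the 50/50 static split are hypothetical numbers chosen only for illustration:

```scala
// Sketch of unified memory management (SPARK-10000). All sizes and the
// 50/50 static split below are illustrative assumptions, not Spark defaults.
object UnifiedMemorySketch {
  // Total heap available to Spark's memory manager (hypothetical: 1 GiB).
  val usableHeapBytes: Long = 1L << 30

  // Old static model: execution and storage each get a fixed, exclusive
  // share; a large shuffle cannot use memory that caching leaves idle.
  val staticExecutionBytes: Long = usableHeapBytes / 2
  val staticStorageBytes: Long   = usableHeapBytes / 2

  // Unified model: both draw from one shared pool, so execution can grow
  // into whatever caching is not currently using.
  def executionCapacity(storageUsedBytes: Long): Long =
    usableHeapBytes - storageUsedBytes

  def main(args: Array[String]): Unit = {
    val cached = 100L * 1024 * 1024 // only 100 MiB is cached
    println(s"static execution cap:  $staticExecutionBytes")
    println(s"unified execution cap: ${executionCapacity(cached)}")
  }
}
```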
SPARK-10978 *Datasource API Avoid Double Filter* - When implementing a
datasource with filter pushdown, developers can now tell Spark SQL to avoid
double-evaluating a pushed-down filter.

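The contract can be sketched in plain Scala (this is a conceptual model, not the Spark API itself): the source reports which pushed-down filters it could not fully handle, and only those need to be re-evaluated by the engine. The filter types and the source's capabilities below are hypothetical:

```scala
// Conceptual sketch of avoiding double filter evaluation (SPARK-10978).
object PushdownSketch {
  sealed trait Filter
  case class GreaterThan(column: String, value: Int) extends Filter
  case class Contains(column: String, substring: String) extends Filter

  // Hypothetical source that evaluates numeric comparisons natively but
  // cannot handle string predicates: it returns only the filters the
  // engine still has to re-apply on its output rows.
  def unhandledFilters(pushed: Seq[Filter]): Seq[Filter] =
    pushed.filter {
      case _: GreaterThan => false // fully handled inside the source
      case _              => true  // must be re-evaluated upstream
    }

  def main(args: Array[String]): Unit = {
    val pushed = Seq(GreaterThan("age", 21), Contains("name", "a"))
    // Only the string predicate survives; the numeric one is not
    // evaluated a second time.
    println(unhandledFilters(pushed))
  }
}
```

Before this change, Spark SQL conservatively re-applied every pushed-down filter; reporting the unhandled subset lets it skip the redundant pass.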
SPARK-2629 *New improved state management* - trackStateByKey, a DStream
transformation for stateful stream processing, supersedes updateStateByKey
in both functionality and performance.
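The semantics of keyed state updates can be modeled in a few lines of plain Scala (this sketches the update behavior only; it is not Spark Streaming code, and the running-count state is a hypothetical example):

```scala
// Pure-Scala sketch of the per-key state update that a stateful DStream
// transformation like trackStateByKey performs (SPARK-2629).
object StateSketch {
  // State: a running count per key. Each batch of (key, value) events
  // updates only the keys it touches; all other state carries over.
  def updateBatch(state: Map[String, Int],
                  batch: Seq[(String, Int)]): Map[String, Int] =
    batch.foldLeft(state) { case (st, (key, v)) =>
      st.updated(key, st.getOrElse(key, 0) + v)
    }

  def main(args: Array[String]): Unit = {
    val afterBatch1 = updateBatch(Map.empty, Seq("a" -> 1, "b" -> 2))
    val afterBatch2 = updateBatch(afterBatch1, Seq("a" -> 3))
    println(afterBatch2) // Map(a -> 4, b -> 2)
  }
}
```

The performance point of SPARK-2629 is that only touched keys pay an update cost per batch, rather than scanning all existing state as updateStateByKey did.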

Happy testing!
