GitHub user nongli opened a pull request:
https://github.com/apache/spark/pull/10727
[SPARK-12785][SQL] Add ColumnarBatch, an in memory columnar format for execution.
There are many potential benefits to having an efficient in-memory columnar format as an alternative to UnsafeRow. This patch introduces ColumnarBatch/ColumnVector, which starts this effort. The remaining implementation can be done as follow-up patches.
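To make the intent concrete, below is a minimal sketch of filling and scanning a single column. The package/class names, the allocate factory, MemoryMode, and the putInt/getInt methods are assumptions based on this description, not a confirmed API of the patch.

    // Hedged sketch: fill one int column and scan it back.
    // ColumnVector.allocate(...), putInt/getInt, MemoryMode and the package are assumed names.
    import org.apache.spark.memory.MemoryMode;
    import org.apache.spark.sql.execution.vectorized.ColumnVector;
    import org.apache.spark.sql.types.DataTypes;

    public class ColumnVectorSketch {
      public static void main(String[] args) {
        int capacity = 1024;
        ColumnVector col =
            ColumnVector.allocate(capacity, DataTypes.IntegerType, MemoryMode.ON_HEAP);
        for (int i = 0; i < capacity; i++) {
          col.putInt(i, i * 2);        // writes addressed by row id, no per-row object
        }
        long sum = 0;
        for (int i = 0; i < capacity; i++) {
          sum += col.getInt(i);        // tight scan over contiguous values
        }
        System.out.println(sum);
        col.close();                   // release backing memory (matters for off-heap)
      }
    }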
As stated in the JIRA, there are useful external components that operate on memory in a simple columnar format. ColumnarBatch would serve that purpose and could serve as a zero-serialization/zero-copy exchange for this use case.
This patch supports storing the underlying data either on heap or off heap. On-heap runs a bit faster, but we would need off-heap for zero-copy exchanges. Currently, this mode is hidden behind one interface (ColumnVector).
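A hedged sketch of what that looks like from the caller's side: the memory mode is chosen once at allocation, and every read and write goes through the same ColumnVector interface. The MemoryMode values and the allocate signature are assumptions, not confirmed API of the patch.

    // Hedged sketch: same ColumnVector interface, two backing stores.
    // MemoryMode.ON_HEAP/OFF_HEAP and allocate(...) are assumed names.
    import org.apache.spark.memory.MemoryMode;
    import org.apache.spark.sql.execution.vectorized.ColumnVector;
    import org.apache.spark.sql.types.DataTypes;

    public class MemoryModeSketch {
      public static void main(String[] args) {
        ColumnVector onHeap =
            ColumnVector.allocate(4096, DataTypes.LongType, MemoryMode.ON_HEAP);   // primitive array backed (assumed)
        ColumnVector offHeap =
            ColumnVector.allocate(4096, DataTypes.LongType, MemoryMode.OFF_HEAP);  // Unsafe/native memory (assumed)

        onHeap.putLong(0, 42L);
        offHeap.putLong(0, 42L);

        // Callers are oblivious to where the bytes actually live.
        System.out.println(onHeap.getLong(0) == offHeap.getLong(0));

        onHeap.close();
        offHeap.close();
      }
    }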
This differs from Parquet or the existing columnar cache because it is *not* intended to be used as a storage format. The focus is entirely on CPU efficiency, as we expect to have only one of these batches in memory per task.
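For operators that still consume rows, the batch can also be walked row by row (one of the commits below adds a Java 7 style iterator interface). Below is a hedged usage sketch; ColumnarBatch.allocate, setNumRows, rowIterator, and the nested Row type are assumed names based on this description.

    // Hedged sketch: build a small batch column-wise, then read it row-wise.
    import java.util.Iterator;
    import org.apache.spark.memory.MemoryMode;
    import org.apache.spark.sql.execution.vectorized.ColumnarBatch;
    import org.apache.spark.sql.types.DataTypes;
    import org.apache.spark.sql.types.StructType;

    public class RowIteratorSketch {
      public static void main(String[] args) {
        StructType schema = new StructType()
            .add("id", DataTypes.IntegerType)
            .add("value", DataTypes.DoubleType);
        ColumnarBatch batch = ColumnarBatch.allocate(schema, MemoryMode.ON_HEAP);

        for (int i = 0; i < 100; i++) {
          batch.column(0).putInt(i, i);
          batch.column(1).putDouble(i, i * 0.5);
        }
        batch.setNumRows(100);

        double total = 0;
        Iterator<ColumnarBatch.Row> it = batch.rowIterator();  // plain java.util.Iterator, Java 7 friendly
        while (it.hasNext()) {
          total += it.next().getDouble(1);
        }
        System.out.println(total);
        batch.close();
      }
    }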
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/nongli/spark spark-12785
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/10727.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #10727
----
commit ceb52690487f8e56c7a0451092553b98ca540221
Author: Nong <[email protected]>
Date: 2016-01-01T05:12:44Z
[SPARK-12785][SQL] Add ColumnarBatch, an in memory columnar format for execution.
commit 57314e5c76e5de20bd4cb0a6dca8a1888cd3d391
Author: Nong Li <[email protected]>
Date: 2016-01-07T05:14:08Z
CR
commit 2a09ff0e24249368746ee369b692992c94617e49
Author: Nong Li <[email protected]>
Date: 2016-01-12T00:40:26Z
Fix double put api.
commit cde12f410502414ab7479d736e089d6c55d78e2b
Author: Nong Li <[email protected]>
Date: 2016-01-12T02:06:30Z
Fix imports and rebase.
commit be39d750dc983a64d2fc3549106f609747f0731a
Author: Nong Li <[email protected]>
Date: 2016-01-12T05:52:08Z
Support java 7 iterator interface.
----