GitHub user kiszk commented on the issue:
https://github.com/apache/spark/pull/11956
@davies, thank you for your comment. I hope you will have some bandwidth
soon, now that Spark 2.0 has been released.
[This PR](https://github.com/apache/spark/pull/13899/files) does the same
thing. In particular, the generated code for reading a column is almost the
same. The difference is whether to use the conventional `CachedBatch`, which
stores columns as `Array[Byte]`, or the new `CachedBatchByte`, which may use a
`ColumnarBatch` created by [generated code
](https://gist.github.com/andrewor14/a9ed9d942029457a0f953e809ac26ee9). I would
like to simplify my PR by using the idea in [that
PR](https://github.com/apache/spark/pull/13899/files). For example, I could
throw away the new files `ByteBufferColumnVector.java` and
`PassThroughSuite.scala`.
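To make the difference concrete, here is a rough sketch of the two
representations (the class and field names are illustrative stand-ins, not
the exact Spark code):

```scala
// Conventional cache: each column is serialized into a byte array,
// so reads must decode bytes back into typed values.
case class CachedBatch(numRows: Int, buffers: Array[Array[Byte]])

// ColumnarBatch-style cache: columns stay in a typed, in-memory
// vector, so generated code can read values directly without
// deserializing first. SimpleColumnVector is a stand-in for a
// real ColumnVector.
final class SimpleColumnVector(values: Array[Int]) {
  def getInt(rowId: Int): Int = values(rowId)
}

case class CachedColumnarBatch(numRows: Int, columns: Array[SimpleColumnVector])
```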
I have a few questions about [the
PR](https://github.com/apache/spark/pull/13899/files):
1. Do we use the conventional `CachedBatch` or `ColumnarBatch` for the cache?
2. In this implementation, how will cached content in a `ColumnarBatch` be
serialized when it must be flushed to disk? (See the sketch after this list
for what I mean.)
3. Which test cases failed? The links to the test results are no longer valid.
4. Will we support compression schemes in the future while we use
`ColumnarBatch`?
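For question 2, this is roughly the step I am asking about, as a hedged
sketch: a plain `Array[Int]` stands in for one `ColumnarBatch` column, and
`serializeIntColumn` is a hypothetical helper, not an existing Spark API.

```scala
import java.nio.ByteBuffer

// Hypothetical helper: encode one Int column into bytes so the
// in-memory columnar data can be flushed to disk.
def serializeIntColumn(values: Array[Int]): Array[Byte] = {
  val buf = ByteBuffer.allocate(values.length * java.lang.Integer.BYTES)
  values.foreach(v => buf.putInt(v))
  buf.array()
}

// Example: 3 rows of an Int column become 12 bytes on disk.
val bytes = serializeIntColumn(Array(1, 2, 3))
assert(bytes.length == 12)
```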
What do you think?