GitHub user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/1880#discussion_r16052399
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/columnar/InMemoryColumnarTableScan.scala ---
@@ -90,22 +101,31 @@ private[sql] case class InMemoryColumnarTableScan(
   override def execute() = {
     relation.cachedColumnBuffers.mapPartitions { iterator =>
-      val columnBuffers = iterator.next()
-      assert(!iterator.hasNext)
+      // Find the ordinals of the requested columns. If none are requested, use the first.
+      val requestedColumns =
+        if (attributes.isEmpty) {
+          Seq(0)
+        } else {
+          attributes.map(a => relation.output.indexWhere(_.exprId == a.exprId))
+        }

       new Iterator[Row] {
-        // Find the ordinals of the requested columns. If none are requested, use the first.
-        val requestedColumns =
-          if (attributes.isEmpty) {
-            Seq(0)
-          } else {
-            attributes.map(a => relation.output.indexWhere(_.exprId == a.exprId))
-          }
+        private[this] var columnBuffers: Array[ByteBuffer] = null
+        private[this] var columnAccessors: Seq[ColumnAccessor] = null
+        nextBatch()
--- End diff --
Maybe I'm not getting it correctly, but do you mean we should try to reuse batch buffers rather than always allocating new ones for each new batch? I like the idea, and it would surely make the column buffer building process more memory efficient. However, given the way `ColumnBuilder` is currently implemented, buffer reuse needs more work, which is probably better done in another PR :)