Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16440#discussion_r94272750
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkExecuteStatementOperation.scala
---
@@ -111,9 +115,15 @@ private[hive] class SparkExecuteStatementOperation(
// Reset iter to header when fetching start from first row
if (order.equals(FetchOrientation.FETCH_FIRST)) {
- val (ita, itb) = iterHeader.duplicate
- iter = ita
- iterHeader = itb
+ iter = if (useIncrementalCollect) {
+ resultList = None
+ result.toLocalIterator.asScala
+ } else {
+ if (resultList.isEmpty) {
--- End diff ---
I agree that this makes the implicit buffering explicit. So, if an iterator
is duplicated into A and B, and all of A is consumed, then B will internally
buffer everything from A so it can be replayed? And in our case, we know that A
will be entirely consumed? Then these are basically the same, yes.
But does that solve the problem? This now always stores the whole result
set locally. Is this avoiding a second whole copy of it?
What if you always just return `result.collect().iterator` here -- is the
problem the cost of re-collecting the result every time?
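The `Iterator.duplicate` buffering semantics discussed above can be sketched in plain Scala, outside of Spark (a hypothetical standalone demo, not the thriftserver code):

```scala
// Sketch of scala.collection.Iterator.duplicate semantics:
// duplicate returns two iterators over the same underlying source.
// Elements consumed through one copy are buffered internally so the
// other copy can still replay them -- this is the hidden memory cost
// the review comment is pointing at.
object DuplicateDemo extends App {
  val (a, b) = Iterator(1, 2, 3).duplicate

  // Consuming all of A forces every element into the shared buffer...
  val fromA = a.toList

  // ...so B still yields the full sequence afterwards.
  val fromB = b.toList

  println(fromA) // List(1, 2, 3)
  println(fromB) // List(1, 2, 3)
}
```

If A is always fully consumed (as in the FETCH_FIRST path), the whole result set ends up buffered in memory either way, which is why this is "basically the same" as keeping an explicit local copy.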