This is an automated email from the ASF dual-hosted git repository.

yao pushed a commit to branch branch-1.5
in repository https://gitbox.apache.org/repos/asf/incubator-kyuubi.git


The following commit(s) were added to refs/heads/branch-1.5 by this push:
     new f581e81  [KYUUBI #1215][DOC] Document incremental collection
f581e81 is described below

commit f581e815a2ccf3c6f20aaf292652a52b599fa9a3
Author: Cheng Pan <[email protected]>
AuthorDate: Tue Mar 8 20:49:51 2022 +0800

    [KYUUBI #1215][DOC] Document incremental collection
    
    ### _Why are the changes needed?_
    
    Document incremental collection, close #1215
    
    ### _How was this patch tested?_
    - [ ] Add some test cases that check the changes thoroughly including negative and positive cases if possible
    
    - [x] Add screenshots for manual tests if appropriate
    
    <img width="1914" alt="1" src="https://user-images.githubusercontent.com/26535726/157191008-451bee2a-eb6b-4bb6-869b-5b0f75a21448.png">
    <img width="1919" alt="2" src="https://user-images.githubusercontent.com/26535726/157191016-e183bbf5-aa4a-496d-a250-5d14cf04101d.png">
    <img width="1920" alt="3" src="https://user-images.githubusercontent.com/26535726/157191026-343a39d7-0ab8-4886-9a51-3670394ef6be.png">
    
    - [ ] [Run test](https://kyuubi.apache.org/docs/latest/develop_tools/testing.html#running-tests) locally before make a pull request
    
    Closes #2057 from pan3793/doc.
    
    Closes #1215
    
    82677a03 [Cheng Pan] grammar
    3be0e3f1 [Cheng Pan] fix
    b467e975 [Cheng Pan] Update
    d256cbaf [Cheng Pan] compress picture
    b3c5fb64 [Cheng Pan] Fix
    4c06307c [Cheng Pan] [KYUUBI #1215][DOC] Document incremental collection
    
    Authored-by: Cheng Pan <[email protected]>
    Signed-off-by: Kent Yao <[email protected]>
    (cherry picked from commit 54dfb4bbc0edaf399a08f13245e3d33057d1d6cd)
    Signed-off-by: Kent Yao <[email protected]>
---
 docs/deployment/spark/incremental_collection.md | 126 ++++++++++++++++++++++++
 docs/deployment/spark/index.rst                 |   1 +
 docs/imgs/spark/incremental_collection.png      | Bin 0 -> 119034 bytes
 3 files changed, 127 insertions(+)

diff --git a/docs/deployment/spark/incremental_collection.md b/docs/deployment/spark/incremental_collection.md
new file mode 100644
index 0000000..7afa3d0
--- /dev/null
+++ b/docs/deployment/spark/incremental_collection.md
@@ -0,0 +1,126 @@
+<!--
+ - Licensed to the Apache Software Foundation (ASF) under one or more
+ - contributor license agreements.  See the NOTICE file distributed with
+ - this work for additional information regarding copyright ownership.
+ - The ASF licenses this file to You under the Apache License, Version 2.0
+ - (the "License"); you may not use this file except in compliance with
+ - the License.  You may obtain a copy of the License at
+ -
+ -   http://www.apache.org/licenses/LICENSE-2.0
+ -
+ - Unless required by applicable law or agreed to in writing, software
+ - distributed under the License is distributed on an "AS IS" BASIS,
+ - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ - See the License for the specific language governing permissions and
+ - limitations under the License.
+ -->
+
+<div align=center>
+
+![](../../imgs/kyuubi_logo.png)
+
+</div>
+
+# Solution for Big Result Sets
+
+Typically, when a user submits a SELECT query to the Spark SQL engine, the Driver calls `collect` to trigger
+computation and gather the entire data set across all tasks (i.e. the partitions of an RDD). Once all partition data
+has arrived, the client pulls the result set from the Driver through the Kyuubi Server in small batches.
+
+Therefore, for a query with a big result set, the bottleneck is the Spark Driver. To avoid OOM, Spark provides the
+configuration `spark.driver.maxResultSize`, which defaults to `1g`. You should enlarge it, as well as
+`spark.driver.memory`, if your query returns a result set of several GB. But what if the result set is dozens or even
+hundreds of GB? That is where incremental collection mode comes in.
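+
+For example, you might raise both limits in `spark-defaults.conf` before resorting to incremental collection (the
+`12g`/`8g` values below are illustrative assumptions, not recommendations; size them to your workload):
+
+```
+spark.driver.memory        12g
+spark.driver.maxResultSize 8g
+```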
+
+## Incremental collection
+
+Since v1.4.0-incubating, Kyuubi has supported incremental collection mode as a solution for big result sets. This
+feature is disabled by default; you can enable it by setting the configuration
+`kyuubi.operation.incremental.collect` to `true`.
+
+Incremental collection changes the gathering method from `collect` to `toLocalIterator`. `toLocalIterator` is a
+Spark action that sequentially submits Jobs to retrieve partitions. As each partition is retrieved, the client pulls
+the result set from the Driver through the Kyuubi Server in a streaming manner. This significantly reduces the
+required Driver memory from the size of the complete result set to that of the largest partition.
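The contrast can be sketched in plain Python, with nested lists standing in for an RDD's partitions (`collect_all` and `stream_partitions` are illustrative names for this sketch, not Spark or Kyuubi APIs):

```python
# Plain-Python sketch of the two gathering strategies. The nested lists
# stand in for the partitions of an RDD.
partitions = [[0, 1, 2], [3, 4], [5, 6, 7, 8], [9]]

def collect_all(parts):
    # Like collect(): the whole result set is materialized at once,
    # so peak "driver" memory is proportional to the full result set.
    rows = []
    for p in parts:
        rows.extend(p)
    return rows

def stream_partitions(parts):
    # Like toLocalIterator(): rows are yielded one partition at a time,
    # so at most one partition is resident while the client consumes rows.
    for p in parts:
        yield from p

assert collect_all(partitions) == list(stream_partitions(partitions))
```

In real Spark, `toLocalIterator` additionally submits one Job per partition, which is why the Spark UI comparison later in this page shows more jobs in incremental mode.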
+
+Incremental collection is not a silver bullet; enable it carefully, because it can significantly hurt performance.
+Moreover, even in incremental collection mode, when multiple queries execute concurrently, each query still keeps one
+partition of data in Driver memory. Therefore, it is still important to control the number of concurrent queries to
+avoid Driver OOM.
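The caveat above can be made concrete with a rough estimate (a hypothetical back-of-envelope calculation, not a Kyuubi formula):

```python
# Hypothetical sizing estimate: in incremental mode each concurrent query
# keeps roughly one partition resident in the Driver, so peak usage scales
# with concurrency times the largest partition size.
def peak_driver_bytes(concurrent_queries, largest_partition_bytes):
    return concurrent_queries * largest_partition_bytes

# e.g. 8 concurrent queries, 256 MiB largest partition
print(peak_driver_bytes(8, 256 * 1024 ** 2) / 1024 ** 3, "GiB")  # → 2.0 GiB
```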
+
+## Use in single connections
+
+As explained above, incremental collection mode is not suitable for typical query scenarios. Instead, you can enable
+it only for the specific queries that produce big result sets, for example:
+
+```
+beeline -u 'jdbc:hive2://kyuubi:10009/?spark.driver.maxResultSize=8g;spark.driver.memory=12g#kyuubi.engine.share.level=CONNECTION;kyuubi.operation.incremental.collect=true' \
+    --incremental=true \
+    -f big_result_query.sql
+```
+
+`--incremental=true` is required for the Beeline client; otherwise, the entire result set is fetched and buffered
+before being displayed, which may cause client-side OOM.
+
+## Change incremental collection mode in session
+
+The configuration `kyuubi.operation.incremental.collect` can also be changed within a session using `SET`.
+
+```
+~ beeline -u 'jdbc:hive2://localhost:10009'
+Connected to: Apache Kyuubi (Incubating) (version 1.5.0-SNAPSHOT)
+
+0: jdbc:hive2://localhost:10009/> set kyuubi.operation.incremental.collect=true;
++---------------------------------------+--------+
+|                  key                  | value  |
++---------------------------------------+--------+
+| kyuubi.operation.incremental.collect  | true   |
++---------------------------------------+--------+
+1 row selected (0.039 seconds)
+
+0: jdbc:hive2://localhost:10009/> select /*+ REPARTITION(5) */ * from range(1, 10);
++-----+
+| id  |
++-----+
+| 2   |
+| 6   |
+| 7   |
+| 0   |
+| 5   |
+| 3   |
+| 4   |
+| 1   |
+| 8   |
+| 9   |
++-----+
+10 rows selected (1.929 seconds)
+
+0: jdbc:hive2://localhost:10009/> set kyuubi.operation.incremental.collect=false;
++---------------------------------------+--------+
+|                  key                  | value  |
++---------------------------------------+--------+
+| kyuubi.operation.incremental.collect  | false  |
++---------------------------------------+--------+
+1 row selected (0.027 seconds)
+
+0: jdbc:hive2://localhost:10009/> select /*+ REPARTITION(5) */ * from range(1, 10);
++-----+
+| id  |
++-----+
+| 2   |
+| 6   |
+| 7   |
+| 0   |
+| 5   |
+| 3   |
+| 4   |
+| 1   |
+| 8   |
+| 9   |
++-----+
+10 rows selected (0.128 seconds)
+```
+
+From the Spark UI, we can see that in incremental collection mode the query produces 5 jobs (red box), while in
+normal mode it produces only 1 job (blue box).
+
+![](../../imgs/spark/incremental_collection.png)
diff --git a/docs/deployment/spark/index.rst b/docs/deployment/spark/index.rst
index e1ae679..7cbd26c 100644
--- a/docs/deployment/spark/index.rst
+++ b/docs/deployment/spark/index.rst
@@ -32,3 +32,4 @@ Even if you don't use Kyuubi, as a simple Spark user, I'm sure you'll find the n
 
     dynamic_allocation
     aqe
+    incremental_collection
diff --git a/docs/imgs/spark/incremental_collection.png b/docs/imgs/spark/incremental_collection.png
new file mode 100644
index 0000000..f03481b
Binary files /dev/null and b/docs/imgs/spark/incremental_collection.png differ
