This is an automated email from the ASF dual-hosted git repository.

chengpan pushed a commit to branch branch-1.10
in repository https://gitbox.apache.org/repos/asf/kyuubi.git


The following commit(s) were added to refs/heads/branch-1.10 by this push:
     new 245f005bbe [KYUUBI #6900] [DOCS] Correct spelling errors in 'large_query_results' part
245f005bbe is described below

commit 245f005bbe5edb822899adf4278118c2321e427e
Author: dongshuyou <[email protected]>
AuthorDate: Thu Jan 23 19:48:11 2025 +0800

    [KYUUBI #6900] [DOCS] Correct spelling errors in 'large_query_results' part
    
    ### Why are the changes needed?
    Correcting the spelling makes the documentation better.
    ### How was this patch tested?
    No need.
    ### Was this patch authored or co-authored using generative AI tooling?
    No.
    
    Closes #6900 from shuyouZZ/new-branch.
    
    Closes #6900
    
    27220abaf [dongshuyou] [DOCS] Correct spelling errors in 'large_query_results' part
    
    Authored-by: dongshuyou <[email protected]>
    Signed-off-by: Cheng Pan <[email protected]>
    (cherry picked from commit fee4899fdc54c44dce31e3636bf9a0dbe84a63d4)
    Signed-off-by: Cheng Pan <[email protected]>
---
 docs/deployment/spark/large_query_results.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/deployment/spark/large_query_results.md b/docs/deployment/spark/large_query_results.md
index a2b07a5574..ad9edd6f75 100644
--- a/docs/deployment/spark/large_query_results.md
+++ b/docs/deployment/spark/large_query_results.md
@@ -191,7 +191,7 @@ kyuubi-beeline -u 'jdbc:kyuubi://kyuubi:10009/' \
 Note: Arrow serialization requires the client to support deserializing Arrow format results, you must use Kyuubi
 JDBC driver v1.7.0 or higher, third-party clients like Apache Hive JDBC driver, PyHive does not work.
 
-## Sava query results into files
+## Save query results into files
 
 Since v1.9.0, Kyuubi Spark SQL engine supports saving the query results into files instead of collecting to the
 Spark driver memory before retrieving to client. This dramatically reduces the memory pressure of the Spark driver,
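
For reference, the save-to-file behavior described in the hunk above is controlled through Kyuubi server configuration. The snippet below is a minimal sketch only; the property names (kyuubi.operation.result.saveToFile.*) are assumed from Kyuubi's configuration reference and should be verified against the deployed version:

    # kyuubi-defaults.conf -- property names assumed, verify for your Kyuubi version
    # Write large query results to files instead of holding them in Spark driver memory
    kyuubi.operation.result.saveToFile.enabled=true
    # Directory (e.g. on HDFS or another shared filesystem) where result files are written
    kyuubi.operation.result.saveToFile.dir=/tmp/kyuubi/query_results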
