pan3793 opened a new issue, #5377:
URL: https://github.com/apache/kyuubi/issues/5377

   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   
   
   ### Search before creating
   
   - [X] I have searched in the [task 
list](https://github.com/orgs/apache/projects/296) and found no similar tasks.
   
   
   ### Mentor
   
   - [ ] I have sufficient knowledge and experience of this task, and I 
volunteer to be the mentor of this task to guide contributors to complete the 
task.
   
   
   ### Skill requirements
   
   - Basic knowledge of the Scala programming language
   - Familiarity with Apache Spark
   
   
   ### Background and Goals
   
   A client's SQL query may cause the Spark engine to fail because the result set is too large, leading to an OOM on the driver.
   Although the number of result rows can be limited by configuring `kyuubi.operation.result.max.rows`, a single row that is too large can still cause an OOM.
   If the engine supported writing query results to HDFS or another storage system, it could fetch the results back from HDFS when the client requests them, which would avoid the OOM problem.
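   As a rough illustration of the idea (not Kyuubi's actual API), the engine could spill result rows to a file on shared storage and serve them back in bounded batches, so the driver never materializes the full result set. The object and method names below are hypothetical, and a local path stands in for HDFS:
   
   ```scala
   import java.nio.file.{Files, Paths}
   import scala.jdk.CollectionConverters._
   
   // Hypothetical sketch: spill result rows to a file instead of holding
   // them all in driver memory, then fetch them back in bounded batches.
   object ResultSpillSketch {
     // Write rows (one per line) to a spill file. In the real proposal this
     // would target HDFS or another storage system.
     def spill(rows: Iterator[String], path: String): Unit = {
       val writer = Files.newBufferedWriter(Paths.get(path))
       try rows.foreach { r => writer.write(r); writer.newLine() }
       finally writer.close()
     }
   
     // Fetch one bounded batch of rows, so memory use is capped by the
     // batch size rather than the total result size.
     def fetchBatch(path: String, offset: Int, size: Int): Seq[String] =
       Files.lines(Paths.get(path)).iterator().asScala
         .slice(offset, offset + size).toSeq
   }
   ```
   
   In a real implementation the spilled data would likely be a columnar format such as Parquet rather than plain text, but the fetch pattern is the same.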
   
   
   ### Implementation steps
   
   N/A
   
   ### Additional context
   
   Original reporter is @cxzl25 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

