[ https://issues.apache.org/jira/browse/HIVE-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15553034#comment-15553034 ]

Sergey Shelukhin commented on HIVE-14901:
-----------------------------------------

We don't even use the config in all cases; see HIVE-14876.

> HiveServer2: Use user supplied fetch size to determine #rows serialized in tasks
> --------------------------------------------------------------------------------
>
>                 Key: HIVE-14901
>                 URL: https://issues.apache.org/jira/browse/HIVE-14901
>             Project: Hive
>          Issue Type: Sub-task
>          Components: HiveServer2, JDBC, ODBC
>    Affects Versions: 2.1.0
>            Reporter: Vaibhav Gumashta
>
> Currently, we use {{hive.server2.thrift.resultset.max.fetch.size}} to decide 
> the maximum number of rows that we write in tasks. Ideally, however, we should 
> use the user-supplied value (which can be extracted from the request parameter 
> of ThriftCLIService.FetchResults) to decide how many rows to serialize into a 
> blob in the tasks. We should still use 
> {{hive.server2.thrift.resultset.max.fetch.size}} as an upper bound on it, so 
> that we don't go OOM in tasks or HS2. 
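
For illustration, here is a minimal sketch of the clamping logic proposed above. The class and method names ({{FetchSizeClamp}}, {{rowsPerBlob}}) are hypothetical, not actual Hive code; only the config key and the ThriftCLIService.FetchResults entry point come from the issue description.

{code:java}
// Hypothetical sketch of the behavior proposed in HIVE-14901; not actual Hive code.
public final class FetchSizeClamp {

  // Server-side upper bound, assumed already parsed from
  // hive.server2.thrift.resultset.max.fetch.size.
  private final int serverMaxFetchSize;

  public FetchSizeClamp(int serverMaxFetchSize) {
    this.serverMaxFetchSize = serverMaxFetchSize;
  }

  // Decide how many rows to serialize per blob in a task: honor the
  // client's requested fetch size (carried in the request handled by
  // ThriftCLIService.FetchResults), but cap it at the configured server
  // maximum so that neither the tasks nor HS2 risk going OOM.
  public int rowsPerBlob(long clientRequestedFetchSize) {
    if (clientRequestedFetchSize <= 0) {
      // No usable client value: fall back to the configured maximum.
      return serverMaxFetchSize;
    }
    return (int) Math.min(clientRequestedFetchSize, serverMaxFetchSize);
  }
}
{code}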



