[
https://issues.apache.org/jira/browse/HIVE-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15898886#comment-15898886
]
Lefty Leverenz commented on HIVE-14901:
---------------------------------------
Doc note: This removes *hive.server2.resultset.default.fetch.size* (introduced
by HIVE-14876 for 2.2.0) and adds
*hive.server2.thrift.resultset.default.fetch.size*, so the latter needs to be
documented in the wiki and the former does not.
* [Configuration Properties -- HiveServer2 |
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-HiveServer2]
Added a TODOC2.2 label.
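For the wiki writeup, a hive-site.xml fragment like the following could illustrate the new property (the value shown is illustrative, not necessarily the shipped default):

```xml
<!-- hive-site.xml: default number of result rows HiveServer2 serializes
     per Thrift fetch when the client does not supply its own fetch size -->
<property>
  <name>hive.server2.thrift.resultset.default.fetch.size</name>
  <value>1000</value>
</property>
```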
> HiveServer2: Use user supplied fetch size to determine #rows serialized in
> tasks
> --------------------------------------------------------------------------------
>
> Key: HIVE-14901
> URL: https://issues.apache.org/jira/browse/HIVE-14901
> Project: Hive
> Issue Type: Sub-task
> Components: HiveServer2, JDBC, ODBC
> Affects Versions: 2.1.0
> Reporter: Vaibhav Gumashta
> Assignee: Norris Lee
> Labels: TODOC2.2
> Attachments: HIVE-14901.1.patch, HIVE-14901.2.patch,
> HIVE-14901.3.patch, HIVE-14901.4.patch, HIVE-14901.5.patch,
> HIVE-14901.6.patch, HIVE-14901.7.patch, HIVE-14901.8.patch,
> HIVE-14901.9.patch, HIVE-14901.patch
>
>
> Currently, we use {{hive.server2.thrift.resultset.max.fetch.size}} to decide
> the max number of rows that we write in tasks. However, we should ideally use
> the user-supplied value (which can be extracted from the
> ThriftCLIService.FetchResults request parameter) to decide how many rows to
> serialize in a blob in the tasks. We should, however, still use
> {{hive.server2.thrift.resultset.max.fetch.size}} as an upper bound, so that
> we don't run out of memory (OOM) in tasks and HS2.
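The capping behavior described above can be sketched as follows. This is a minimal illustration, not Hive's actual implementation; the constant names stand in for the hypothetical resolved values of hive.server2.thrift.resultset.default.fetch.size and hive.server2.thrift.resultset.max.fetch.size:

```java
public class FetchSizeCap {
    // Hypothetical stand-ins for the two server-side config values.
    static final int DEFAULT_FETCH_SIZE = 1000;
    static final int MAX_FETCH_SIZE = 10000;

    // Use the client-supplied fetch size when given, but never exceed the
    // server-side maximum; fall back to the default when no usable value
    // arrives with the FetchResults request.
    static int effectiveFetchSize(int requested) {
        if (requested <= 0) {
            return DEFAULT_FETCH_SIZE;              // no usable client value
        }
        return Math.min(requested, MAX_FETCH_SIZE); // cap to avoid OOM
    }

    public static void main(String[] args) {
        System.out.println(effectiveFetchSize(500));    // within the cap
        System.out.println(effectiveFetchSize(50000));  // capped to the max
        System.out.println(effectiveFetchSize(0));      // falls back to default
    }
}
```

On the client side, the user-supplied value would typically originate from the standard JDBC call {{Statement.setFetchSize(int)}}, which the Hive JDBC driver can pass along with each fetch request.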
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)