[
https://issues.apache.org/jira/browse/HIVE-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15255486#comment-15255486
]
Lefty Leverenz commented on HIVE-12049:
---------------------------------------
Doc note: This adds two configuration parameters
(*hive.server2.thrift.resultset.serialize.in.tasks* and
*hive.server2.thrift.resultset.max.fetch.size*) to HiveConf.java, so they will
need to be documented for release 2.1.0 in the HiveServer2 section of
Configuration Properties.
* [Configuration Properties -- HiveServer2 |
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-HiveServer2]
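For the doc writers, a minimal sketch of setting the two new parameters programmatically via HiveConf (the enabled/1000 values below are only illustrative assumptions, not the shipped defaults):
{code:java}
import org.apache.hadoop.hive.conf.HiveConf;

public class ResultSetSerializationConfig {
  public static void main(String[] args) {
    HiveConf conf = new HiveConf();

    // Ask the final tasks to write thrift-serialized row batches
    // instead of leaving serialization to HiveServer2 at fetch time.
    conf.setBoolean("hive.server2.thrift.resultset.serialize.in.tasks", true);

    // Upper bound on the number of rows returned per fetch request;
    // 1000 is an illustrative value, not necessarily the default.
    conf.setInt("hive.server2.thrift.resultset.max.fetch.size", 1000);

    System.out.println(conf.getBoolean(
        "hive.server2.thrift.resultset.serialize.in.tasks", false));
  }
}
{code}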
Should there also be general documentation in one of the HiveServer2 docs?
* [Setting Up HiveServer2 |
https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2]
* [HiveServer2 Clients |
https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients]
> HiveServer2: Provide an option to write serialized thrift objects in final
> tasks
> --------------------------------------------------------------------------------
>
> Key: HIVE-12049
> URL: https://issues.apache.org/jira/browse/HIVE-12049
> Project: Hive
> Issue Type: Sub-task
> Components: HiveServer2, JDBC
> Affects Versions: 2.0.0
> Reporter: Rohit Dholakia
> Assignee: Rohit Dholakia
> Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-12049.1.patch, HIVE-12049.11.patch,
> HIVE-12049.12.patch, HIVE-12049.13.patch, HIVE-12049.14.patch,
> HIVE-12049.15.patch, HIVE-12049.16.patch, HIVE-12049.17.patch,
> HIVE-12049.18.patch, HIVE-12049.19.patch, HIVE-12049.2.patch,
> HIVE-12049.25.patch, HIVE-12049.26.patch, HIVE-12049.3.patch,
> HIVE-12049.4.patch, HIVE-12049.5.patch, HIVE-12049.6.patch,
> HIVE-12049.7.patch, HIVE-12049.9.patch, new-driver-profiles.png,
> old-driver-profiles.png
>
>
> For each fetch request to HiveServer2, we pay the penalty of deserializing
> the row objects and translating them into a different representation suitable
> for the RPC transfer. In moderate to high concurrency scenarios, this can
> result in significant CPU and memory waste. By having each task write the
> appropriate thrift objects to the output files, HiveServer2 can simply stream
> a batch of rows over the wire without incurring any of the additional cost of
> deserialization and translation.
> This can be implemented by writing a new SerDe, which the FileSinkOperator
> can use to write thrift-formatted row batches to the output file. Using the
> pluggable {{hive.query.result.fileformat}} property, we can set it to
> SequenceFile and write a batch of thrift-formatted rows as a value blob.
> The FetchTask can then simply read the blob and send it over the wire. On the
> client side, the JDBC/ODBC driver can read the blob, and since it is already
> formatted the way the driver expects, it can continue building the ResultSet
> as it does in the current implementation.
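> A minimal sketch of this serialize-once / stream-blob flow, assuming the
> Thrift-generated row classes from Hive's service-rpc module (the package
> name, protocol factory, and class names are assumptions and may not match
> the actual patch):
> {code:java}
> import java.util.Arrays;
>
> import org.apache.hive.service.rpc.thrift.TColumnValue;
> import org.apache.hive.service.rpc.thrift.TRow;
> import org.apache.hive.service.rpc.thrift.TRowSet;
> import org.apache.hive.service.rpc.thrift.TStringValue;
> import org.apache.thrift.TDeserializer;
> import org.apache.thrift.TException;
> import org.apache.thrift.TSerializer;
> import org.apache.thrift.protocol.TBinaryProtocol;
>
> public class ThriftRowBatchSketch {
>   public static void main(String[] args) throws TException {
>     // Task side: build a batch of rows and serialize it once. The
>     // resulting byte[] is what would be written as the value blob of a
>     // SequenceFile record by the new SerDe.
>     TRow row = new TRow(Arrays.asList(
>         TColumnValue.stringVal(new TStringValue().setValue("hello"))));
>     TRowSet batch = new TRowSet(0L, Arrays.asList(row));
>
>     byte[] blob = new TSerializer(new TBinaryProtocol.Factory())
>         .serialize(batch);
>
>     // HiveServer2 side: the FetchTask can ship the blob as-is, with no
>     // per-row deserialization or translation.
>
>     // Client (JDBC/ODBC) side: decode the blob back into a TRowSet and
>     // keep building the ResultSet as before.
>     TRowSet decoded = new TRowSet();
>     new TDeserializer(new TBinaryProtocol.Factory())
>         .deserialize(decoded, blob);
>     System.out.println(decoded.getRows().size() + " row(s) decoded");
>   }
> }
> {code}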
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)