[ 
https://issues.apache.org/jira/browse/HIVE-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-12049:
------------------------------------
    Description: 
For each fetch request to HiveServer2, we pay the penalty of deserializing the 
row objects and translating them into a different representation suitable for 
the RPC transfer. In moderate- to high-concurrency scenarios, this can result 
in significant CPU and memory wastage. By having each task write the 
appropriate Thrift objects to the output files, HiveServer2 can simply stream a 
batch of rows over the wire without incurring any of the additional cost of 
deserialization and translation. 
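The batching idea above can be sketched as follows. This is illustrative only: it uses plain JDK streams in place of Thrift serialization, and the class and method names ({{RowBatchBlob}}, {{serialize}}, {{deserialize}}) are hypothetical, not part of Hive.

```java
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Illustration of packing a batch of rows into one value blob that can be
// streamed as-is. A real implementation would serialize the Thrift wire
// types; plain DataOutputStream stands in for that here.
public class RowBatchBlob {

    // Task side: pack a batch of rows into a single blob (the SequenceFile value).
    public static byte[] serialize(List<String> rows) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            out.writeInt(rows.size());              // row-count header
            for (String row : rows) {
                byte[] bytes = row.getBytes(StandardCharsets.UTF_8);
                out.writeInt(bytes.length);         // length-prefixed row
                out.write(bytes);
            }
            out.flush();
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Client side: decode the blob; the server never needs to re-translate it.
    public static List<String> deserialize(byte[] blob) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(blob));
            int n = in.readInt();
            List<String> rows = new ArrayList<>(n);
            for (int i = 0; i < n; i++) {
                byte[] bytes = new byte[in.readInt()];
                in.readFully(bytes);
                rows.add(new String(bytes, StandardCharsets.UTF_8));
            }
            return rows;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

The point of the sketch is the asymmetry: only the task (writer) and the client (reader) touch the row contents; the server in between forwards an opaque blob.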
This can be implemented by writing a new SerDe, which the FileSinkOperator can 
use to write Thrift-formatted row batches to the output file. Since 
{{hive.query.result.fileformat}} is pluggable, we can set it to SequenceFile 
and write each batch of Thrift-formatted rows as a value blob. The FetchTask 
can then simply read the blob and send it over the wire. On the client side, 
the *DBC driver can read the blob, and since it is already in the format the 
driver expects, it can continue building the ResultSet the same way it does in 
the current implementation.
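For reference, the result file format is already selectable via configuration. A minimal sketch of the relevant setting (the property and the SequenceFile value exist today; pairing it with the proposed Thrift-writing SerDe is the new part):

```xml
<!-- hive-site.xml: choose SequenceFile as the container for query results.
     Under this proposal, each SequenceFile value would hold one
     Thrift-formatted row batch written by the new SerDe. -->
<property>
  <name>hive.query.result.fileformat</name>
  <value>SequenceFile</value>
</property>
```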


  was:
For each fetch request to HiveServer2, we pay the penalty of deserializing the 
row objects and translating them into a different representation suitable for 
the RPC transfer. In a moderate to high concurrency scenarios, this can result 
in significant CPU and memory wastage. By having each task write the 
appropriate thrift objects to the output files, HiveServer2 can simply stream a 
batch of rows on the wire without incurring any of the additional cost of 
deserialization and translation. 
This can be implemented by writing a new SerDe, which the FileSinkOperator can 
use to write thrift formatted row batches to the output file. Using the 
pluggable property of the hive.query.result.fileformat, we can set it to use 
SequenceFile and write a batch of thrift formatted rows as a value blob. The 
FetchTask can now simply read the blob and send it over the wire. On the client 
side, the *DBC driver can read the blob and since it is already formatted in 
the way it expects, it can continue building the ResultSet the way it does in 
the current implementation.



> Provide an option to write serialized thrift objects in final tasks
> -------------------------------------------------------------------
>
>                 Key: HIVE-12049
>                 URL: https://issues.apache.org/jira/browse/HIVE-12049
>             Project: Hive
>          Issue Type: Sub-task
>          Components: HiveServer2
>            Reporter: Rohit Dholakia
>            Assignee: Rohit Dholakia
>
> For each fetch request to HiveServer2, we pay the penalty of deserializing 
> the row objects and translating them into a different representation suitable 
> for the RPC transfer. In moderate- to high-concurrency scenarios, this can 
> result in significant CPU and memory wastage. By having each task write the 
> appropriate Thrift objects to the output files, HiveServer2 can simply stream 
> a batch of rows over the wire without incurring any of the additional cost of 
> deserialization and translation. 
> This can be implemented by writing a new SerDe, which the FileSinkOperator 
> can use to write Thrift-formatted row batches to the output file. Since 
> {{hive.query.result.fileformat}} is pluggable, we can set it to SequenceFile 
> and write each batch of Thrift-formatted rows as a value blob. The FetchTask 
> can then simply read the blob and send it over the wire. On the client side, 
> the *DBC driver can read the blob, and since it is already in the format the 
> driver expects, it can continue building the ResultSet the same way it does 
> in the current implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
