[ https://issues.apache.org/jira/browse/HIVE-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14940549#comment-14940549 ]

Eugene Koifman commented on HIVE-11983:
---------------------------------------

DelimitedInputWriter:215 - why did you change this to StringBuffer?
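
Context for the question above: StringBuffer synchronizes every call, while
StringBuilder is the drop-in unsynchronized equivalent, so StringBuilder is the
usual choice unless the buffer is shared across threads. A minimal illustration
(variable names here are just for show):

    // Same API as StringBuffer, but no per-call locking.
    StringBuilder sb = new StringBuilder();
    sb.append("field1").append(',').append("field2");
    String row = sb.toString();
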
AbstractRecordWriter:80 - why change how the class is loaded?
StrictJsonWriter: the 2 c'tors seem identical except for the HiveConf. Could the
1st one use this(endPoint, null)?
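
A minimal sketch of the suggested chaining (signatures simplified; the Object
fields are stand-ins for the real HiveEndPoint/HiveConf types):

    // The no-conf constructor delegates to the full one instead of
    // duplicating its body.
    public class StrictJsonWriterSketch {
      private final Object endPoint; // stand-in for HiveEndPoint
      private final Object conf;     // stand-in for HiveConf

      public StrictJsonWriterSketch(Object endPoint) {
        this(endPoint, null); // reuse the 2-arg c'tor
      }

      public StrictJsonWriterSketch(Object endPoint, Object conf) {
        this.endPoint = endPoint;
        this.conf = conf;
      }
    }
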
getObjectInspectorsForBucketedCols() seems exactly the same as in 
DelimitedInputWriter
getBucketFields() - same as above
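
One way to remove that duplication, sketched with simplified stand-ins (the
method name is from this review; the body is illustrative, not Hive's actual
code):

    import java.util.ArrayList;
    import java.util.List;

    // Hoist the shared helper into the base class so each concrete writer
    // (DelimitedInputWriter, StrictJsonWriter) inherits one copy.
    abstract class AbstractRecordWriterSketch {
      protected final List<Integer> bucketColPositions;

      AbstractRecordWriterSketch(List<Integer> bucketColPositions) {
        this.bucketColPositions = bucketColPositions;
      }

      // Formerly duplicated as getBucketFields() in both writers.
      protected List<Object> getBucketFields(List<Object> allFields) {
        List<Object> bucketFields = new ArrayList<>(bucketColPositions.size());
        for (int pos : bucketColPositions) {
          bucketFields.add(allFields.get(pos));
        }
        return bucketFields;
      }
    }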

The write(long, byte[]) methods on the 2 writers: one calls reorderFields(), the
other does not. Is that intentional?

TestStreaming:
this has driver.run("set ....") - Driver doesn't support the "set" command, so all
of these are guaranteed to fail.
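
A sketch of the usual workaround, assuming the test builds its own HiveConf (the
ConfVars constant below is just an example setting):

    import org.apache.hadoop.hive.conf.HiveConf;
    import org.apache.hadoop.hive.ql.Driver;

    // Put the settings on the HiveConf before constructing the Driver,
    // instead of driver.run("set ...") which Driver does not handle.
    HiveConf conf = new HiveConf();
    conf.setBoolVar(HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY, true);
    Driver driver = new Driver(conf);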

> Hive streaming API uses incorrect logic to assign buckets to incoming records
> -----------------------------------------------------------------------------
>
>                 Key: HIVE-11983
>                 URL: https://issues.apache.org/jira/browse/HIVE-11983
>             Project: Hive
>          Issue Type: Bug
>          Components: HCatalog, Transactions
>    Affects Versions: 1.2.1
>            Reporter: Roshan Naik
>            Assignee: Roshan Naik
>              Labels: streaming, streaming_api
>         Attachments: HIVE-11983.3.patch, HIVE-11983.4.patch, HIVE-11983.patch
>
>
> The Streaming API tries to distribute records evenly into buckets. 
> All records in every Transaction that is part of a TransactionBatch go to the 
> same bucket, and a new bucket number is chosen for each TransactionBatch.
> Fix: the API needs to hash each record to determine which bucket it belongs to. 
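
A minimal sketch of that fix's idea, mirroring Hive's bucketing rule of
(hash & Integer.MAX_VALUE) % numBuckets; the hash function and method name below
are stand-ins, not the patch's actual API:

    import java.util.Arrays;

    // Derive the bucket id from the record's bucketing columns, so each
    // record (not each TransactionBatch) is routed independently.
    static int bucketFor(Object[] bucketFields, int numBuckets) {
      int hash = Arrays.hashCode(bucketFields);      // stand-in for Hive's column hashing
      return (hash & Integer.MAX_VALUE) % numBuckets; // non-negative bucket id
    }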


