[ https://issues.apache.org/jira/browse/HIVE-10685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15863297#comment-15863297 ]

wangbaoyun commented on HIVE-10685:
-----------------------------------

If the ORC files have already been merged, reads fail with an EOFException such as
"java.io.EOFException: Read past end of RLE integer from compressed stream Stream
for column 1 kind DATA" and the stripe index goes out of range. How can the merged
file be repaired?
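
For what it's worth, a rough way to assess a merged file is to inspect it with Hive's ORC file dump utility and, if the rows are still readable, regenerate the data through a staging table. This is only a sketch: the warehouse path and the orders_rewrite table name are placeholders, and it assumes the corrupted file can still be scanned at all.

{noformat}
# Inspect stripe metadata of the suspect file (path is a placeholder)
hive --orcfiledump /user/hive/warehouse/orders/000000_0

-- If the data is still readable, rewriting the table produces fresh ORC files
hive> CREATE TABLE orders_rewrite STORED AS ORC AS SELECT * FROM orders;
hive> SELECT COUNT(*) FROM orders_rewrite;   -- verify the row count before swapping tables
{noformat}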

> Alter table concatenate operator will cause duplicate data
> ----------------------------------------------------------
>
>                 Key: HIVE-10685
>                 URL: https://issues.apache.org/jira/browse/HIVE-10685
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 0.14.0, 1.0.0, 1.2.0, 1.1.0, 1.3.0, 1.2.1
>            Reporter: guoliming
>            Assignee: guoliming
>            Priority: Critical
>             Fix For: 1.2.1
>
>         Attachments: HIVE-10685.patch, HIVE-10685.patch
>
>
> "Orders" table has 1500000000 rows and stored as ORC. 
> {noformat}
> hive> select count(*) from orders;
> OK
> 1500000000
> Time taken: 37.692 seconds, Fetched: 1 row(s)
> {noformat}
> The table contains 14 files; each file is about 2.1 to 3.2 GB.
> After executing the command ALTER TABLE orders CONCATENATE;
> the table has 1530115000 rows.
> My Hive version is 1.1.0.
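
For anyone reproducing the report above, a compact sketch of the sequence plus one way the duplication might be surfaced; order_id is an assumed unique key and may not exist in the real schema.

{noformat}
hive> select count(*) from orders;        -- 1500000000 before merging
hive> ALTER TABLE orders CONCATENATE;
hive> select count(*) from orders;        -- 1530115000 after merging
-- if the table has a unique key (order_id is assumed here), the gap between
-- count(*) and count(distinct ...) exposes how many rows were duplicated
hive> select count(*), count(distinct order_id) from orders;
{noformat}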



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
