[ 
https://issues.apache.org/jira/browse/HIVE-10685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15510822#comment-15510822
 ] 

Prasanth Jayachandran commented on HIVE-10685:
----------------------------------------------

It got committed 
https://github.com/apache/hive/commit/aef08f44e29e9a54e73b8029892033fe16c52cc5

> Alter table concatenate operator will cause duplicate data
> ----------------------------------------------------------
>
>                 Key: HIVE-10685
>                 URL: https://issues.apache.org/jira/browse/HIVE-10685
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 0.14.0, 1.0.0, 1.2.0, 1.1.0, 1.3.0, 1.2.1
>            Reporter: guoliming
>            Assignee: guoliming
>            Priority: Critical
>             Fix For: 1.2.1
>
>         Attachments: HIVE-10685.patch, HIVE-10685.patch
>
>
> "Orders" table has 1500000000 rows and stored as ORC. 
> {noformat}
> hive> select count(*) from orders;
> OK
> 1500000000
> Time taken: 37.692 seconds, Fetched: 1 row(s)
> {noformat}
> The table contains 14 files; the size of each file is about 2.1 to 3.2 GB.
> After executing the command ALTER TABLE orders CONCATENATE;
> the table has 1530115000 rows.
> My hive version is 1.1.0.
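A quick way to confirm the duplication described above is to compare the total row count against a distinct count both before and after running CONCATENATE. This is only a sketch; {{order_id}} is a hypothetical unique-key column standing in for whatever key the orders table actually has:
{noformat}
-- Hypothetical duplicate check; order_id is an assumed unique key column.
SELECT COUNT(*)              AS total_rows,
       COUNT(DISTINCT order_id) AS distinct_rows
FROM orders;
-- If total_rows exceeds distinct_rows only after CONCATENATE,
-- the concatenation step duplicated rows.
{noformat}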



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
