[
https://issues.apache.org/jira/browse/HIVE-15199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Yongzhi Chen updated HIVE-15199:
--------------------------------
Comment: was deleted
(was: The 8th patch LGTM +1)
> INSERT INTO data on S3 is replacing the old rows with the new ones
> ------------------------------------------------------------------
>
> Key: HIVE-15199
> URL: https://issues.apache.org/jira/browse/HIVE-15199
> Project: Hive
> Issue Type: Bug
> Components: Hive
> Reporter: Sergio Peña
> Assignee: Sergio Peña
> Priority: Critical
> Attachments: HIVE-15199.1.patch, HIVE-15199.2.patch,
> HIVE-15199.3.patch, HIVE-15199.4.patch, HIVE-15199.5.patch,
> HIVE-15199.6.patch, HIVE-15199.7.patch, HIVE-15199.8.patch, HIVE-15199.9.patch
>
>
> When the scratch directory is placed on S3, any INSERT INTO statement run
> against an S3-backed table deletes the table's existing rows instead of
> appending to them.
> {noformat}
> hive> set hive.blobstore.use.blobstore.as.scratchdir=true;
> hive> create table t1 (id int, name string) location 's3a://spena-bucket/t1';
> hive> insert into table t1 values (1,'name1');
> hive> select * from t1;
> 1 name1
> hive> insert into table t1 values (2,'name2');
> hive> select * from t1;
> 2 name2
> {noformat}
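> Per Hive's DML semantics, INSERT INTO appends rows (only INSERT OVERWRITE
> should replace them), so the correct output of the final SELECT above is
> expected to be both rows:
> {noformat}
> hive> select * from t1;
> 1	name1
> 2	name2
> {noformat}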
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)