[
https://issues.apache.org/jira/browse/HIVE-23520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112425#comment-17112425
]
Gopal Vijayaraghavan commented on HIVE-23520:
---------------------------------------------
bq. not sure about the use case, can you explain in detail please.
This is being used to bootstrap the ~30Tb reference data-sets, for
replication/data-sharing across clusters.
Since doing this with a COPY would cost us real dollars at 30Tb per cluster to
make a full snapshot every time we modify a single table, we're trying to
produce a files.list that references the data in place, so generating a
snapshot takes less than 5 minutes instead of several hours.
The tables are immutable - so we make no data updates between the initial
insert and the REPL DUMP / REPL LOAD.
Incremental replication doesn't work for that (because what we want is a single
bootstrap REPL LOAD on the remote).
So we're not using this to replicate changes, but to load a reference data-set
when we start a new cluster.
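For context, a rough sketch of the bootstrap flow being described, using the REPL
DUMP / REPL LOAD syntax from the Hive replication documentation. The database
name and paths are illustrative, and the WITH option shown is the existing
metadata-only switch, not necessarily the one the attached patch introduces:
{code:sql}
-- Source cluster: bootstrap dump of the immutable reference database.
-- 'hive.repl.dump.metadata.only' is an existing switch that skips copying
-- data files into the dump folder; the patch on this issue is about also
-- retaining references to the in-place data files for such immutable tables.
REPL DUMP reference_db WITH ('hive.repl.dump.metadata.only'='true');
-- REPL DUMP returns the dump directory and the last replication id.

-- New cluster: a single bootstrap REPL LOAD from that dump directory; no
-- incremental loads follow, since the tables never change after insert.
REPL LOAD reference_db FROM '/path/to/repl/dump/dir';
{code}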
> REPL: repl dump could add support for immutable dataset
> -------------------------------------------------------
>
> Key: HIVE-23520
> URL: https://issues.apache.org/jira/browse/HIVE-23520
> Project: Hive
> Issue Type: Improvement
> Reporter: Rajesh Balamohan
> Assignee: Rajesh Balamohan
> Priority: Minor
> Attachments: HIVE-23520.1.patch
>
>
> Currently, "REPL DUMP" ends up copying the entire dataset along with
> partition information, stats etc. into its dump folder. However, there are
> cases (e.g. large reference datasets) where we need a way to just retain
> metadata along with partition information & stats.