[
https://issues.apache.org/jira/browse/HIVE-10165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15156764#comment-15156764
]
Elliot West commented on HIVE-10165:
------------------------------------
Thank you for your edits. Regarding your points:
* Yes, 'on' is correct.
* I believe the correct phrase should be: 'Ingest for the respective group
(partitionValues + bucketId) must be halted and failed records resubmitted to
preserve sequence.'
* Settings section:
** Agreed re: 'transactions'
** I think the list is perhaps out of date. It may be better to simply remove
points a-d and expect users to click through to the canonical configuration at
the location linked in point 4.
Thank you for your time and efforts!
> Improve hive-hcatalog-streaming extensibility and support updates and deletes.
> ------------------------------------------------------------------------------
>
> Key: HIVE-10165
> URL: https://issues.apache.org/jira/browse/HIVE-10165
> Project: Hive
> Issue Type: Improvement
> Components: HCatalog
> Affects Versions: 1.2.0
> Reporter: Elliot West
> Assignee: Elliot West
> Labels: TODOC2.0, streaming_api
> Fix For: 2.0.0
>
> Attachments: HIVE-10165.0.patch, HIVE-10165.10.patch,
> HIVE-10165.4.patch, HIVE-10165.5.patch, HIVE-10165.6.patch,
> HIVE-10165.7.patch, HIVE-10165.9.patch, mutate-system-overview.png
>
>
> h3. Overview
> I'd like to extend the
> [hive-hcatalog-streaming|https://cwiki.apache.org/confluence/display/Hive/Streaming+Data+Ingest]
> API so that it also supports the writing of record updates and deletes in
> addition to the already supported inserts.
> h3. Motivation
> We have many Hadoop processes outside of Hive that merge changed facts into
> existing datasets. Traditionally we achieve this by reading in a
> ground-truth dataset and a modified dataset, grouping by a key, sorting by a
> sequence, and then applying a function to determine inserted, updated, and
> deleted rows. However, in our current scheme we must rewrite all partitions
> that may potentially contain changes. In practice the number of mutated
> records is very small when compared with the records contained in a
> partition. This approach results in a number of operational issues:
> * Excessive write activity is required for small data changes.
> * Downstream applications cannot robustly read these datasets while they are
> being updated.
> * Due to the scale of the updates (hundreds of partitions) the scope for
> contention is high.
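> As a rough illustration, the classification step in such a merge might look
> like the following sketch; the record shape, key and sequence fields here are
> hypothetical and not part of any existing API:
> {code:java}
> import java.util.HashMap;
> import java.util.Map;
>
> // Sketch of the traditional merge: compare a ground-truth ('base') dataset with a
> // modified ('delta') dataset, keyed on a business key and ordered by a sequence.
> public class MergeSketch {
>
>   enum Mutation { INSERT, UPDATE, DELETE }
>
>   // Hypothetical record: 'key' identifies the fact, 'sequence' orders changes,
>   // 'deleted' marks a tombstone in the delta feed.
>   static class Record {
>     final String key;
>     final long sequence;
>     final boolean deleted;
>     Record(String key, long sequence, boolean deleted) {
>       this.key = key;
>       this.sequence = sequence;
>       this.deleted = deleted;
>     }
>   }
>
>   // Classify a delta record against the base dataset (keyed by record key).
>   // Returns null when the change is stale and should be ignored.
>   static Mutation classify(Map<String, Record> baseByKey, Record delta) {
>     Record base = baseByKey.get(delta.key);
>     if (base == null) {
>       return Mutation.INSERT;             // never seen before
>     }
>     if (delta.deleted) {
>       return Mutation.DELETE;             // tombstone for an existing record
>     }
>     return delta.sequence > base.sequence ? Mutation.UPDATE : null;
>   }
>
>   public static void main(String[] args) {
>     Map<String, Record> base = new HashMap<>();
>     base.put("a", new Record("a", 1L, false));
>     System.out.println(classify(base, new Record("a", 2L, false))); // UPDATE
>     System.out.println(classify(base, new Record("b", 1L, false))); // INSERT
>     System.out.println(classify(base, new Record("a", 3L, true)));  // DELETE
>   }
> }
> {code}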
> I believe we can address this problem by instead writing only the changed
> records to a Hive transactional table. This should drastically reduce the
> amount of data that we need to write and also provide a means for managing
> concurrent access to the data. Our existing merge processes can read and
> retain each record's {{ROW_ID}}/{{RecordIdentifier}} and pass this through to
> an updated form of the hive-hcatalog-streaming API which will then have the
> required data to perform an update or insert in a transactional manner.
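> Sketched below is one possible shape for such an API; the interface and method
> names are hypothetical and are not the actual hive-hcatalog-streaming classes:
> {code:java}
> import java.util.List;
>
> // Hypothetical writer for mutations: each update/delete relies on the record
> // carrying the ROW_ID/RecordIdentifier retained by the upstream merge process.
> interface MutationWriter {
>   void insert(List<String> partitionValues, Object record);
>   void update(List<String> partitionValues, Object record); // record carries its RecordIdentifier
>   void delete(List<String> partitionValues, Object record);
>   void flush();
> }
>
> class MutationSketch {
>   // Apply one change of each kind to a single partition, then flush.
>   static void applyChanges(MutationWriter writer, List<String> partition,
>                            Object inserted, Object updated, Object deleted) {
>     writer.insert(partition, inserted);  // new fact: no existing ROW_ID
>     writer.update(partition, updated);   // retained ROW_ID identifies the row to rewrite
>     writer.delete(partition, deleted);   // retained ROW_ID identifies the row to remove
>     writer.flush();
>   }
> }
> {code}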
> h3. Benefits
> * Enables the creation of large-scale dataset merge processes.
> * Opens up Hive transactional functionality in an accessible manner to
> processes that operate outside of Hive.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)