[
https://issues.apache.org/jira/browse/HIVE-15571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15986463#comment-15986463
]
Jesus Camacho Rodriguez commented on HIVE-15571:
------------------------------------------------
[~nishantbangarwa], I have been checking the patch.
If I understand the approach correctly, we might face problems when more than
one INSERT INTO statement is executed in parallel (which Hive allows, since we
are just appending data to the table): the implementation seems to rely on
holding a lock on the table, e.g., it extracts the list of existing segments in
pre-insert and then uses that list throughout the process. Is that correct?
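
To illustrate the concern, here is a minimal, hypothetical sketch (none of the
class or method names below come from the patch or from actual Hive/Druid
APIs): two writers that both snapshot the segment list in pre-insert derive the
same "next" partition number from that stale view and then collide at publish
time.

{code:java}
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class ParallelInsertRace {
    // stands in for the metadata store's view of published segment partitions
    // for a single datasource/interval (illustrative only)
    static final List<Integer> publishedPartitions = new CopyOnWriteArrayList<>();

    // both writers snapshot the same list in pre-insert, so both derive
    // the same "next" partition number
    static int preInsertNextPartition() {
        return publishedPartitions.size();
    }

    static void commit(int partitionNum) {
        if (publishedPartitions.contains(partitionNum)) {
            throw new IllegalStateException("partition " + partitionNum
                + " already published for this interval");
        }
        publishedPartitions.add(partitionNum);
    }

    public static void main(String[] args) {
        int a = preInsertNextPartition(); // writer A sees 0 existing segments
        int b = preInsertNextPartition(); // writer B sees the same stale snapshot
        commit(a);                        // writer A publishes partition 0
        try {
            commit(b);                    // writer B also tries partition 0
        } catch (IllegalStateException e) {
            System.out.println("Conflict between parallel inserts: " + e.getMessage());
        }
    }
}
{code}
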
Another question: when we execute INSERT INTO, would it be feasible to simply
create and register new segments for the new data, relying on Druid to later
merge the small segments into larger/more optimal ones? I guess something
similar is already done for realtime ingestion and its reconciliation with
previously stored segments?
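
A minimal sketch of that alternative, again with purely illustrative names
(AppendThenMerge, registerNewSegment, mergeSmallSegments are assumptions, not
actual Hive/Druid APIs): each INSERT INTO atomically allocates a fresh
partition number and just registers a new segment, so parallel writers do not
conflict, and a later merge step coalesces the small segments.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class AppendThenMerge {
    // hypothetical stand-ins: an atomically allocated partition counter and
    // a registry of published segment identifiers for one interval
    static final AtomicInteger nextPartition = new AtomicInteger(0);
    static final List<String> registry = new ArrayList<>();

    // each INSERT INTO only appends: it allocates a fresh partition number
    // and registers a new segment, without needing a table-wide lock
    static synchronized void registerNewSegment(String interval) {
        int partition = nextPartition.getAndIncrement();
        registry.add(interval + "_partition_" + partition);
    }

    // simulates a later Druid-side merge/compaction of the small segments
    static synchronized void mergeSmallSegments(String interval) {
        registry.removeIf(id -> id.startsWith(interval));
        registry.add(interval + "_merged_partition_0");
    }

    public static void main(String[] args) {
        String interval = "2017-01-01T00:00/2017-01-02T00:00";
        registerNewSegment(interval); // INSERT INTO from writer A
        registerNewSegment(interval); // INSERT INTO from writer B, in parallel
        mergeSmallSegments(interval); // background merge yields one larger segment
        System.out.println(registry);
    }
}
{code}
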
> Support Insert into for druid storage handler
> ---------------------------------------------
>
> Key: HIVE-15571
> URL: https://issues.apache.org/jira/browse/HIVE-15571
> Project: Hive
> Issue Type: New Feature
> Components: Druid integration
> Reporter: slim bouguerra
> Assignee: Nishant Bangarwa
> Attachments: HIVE-15571.01.patch
>
>
> Add support for the INSERT INTO operator in the Druid storage handler.