[
https://issues.apache.org/jira/browse/HIVE-17990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16283986#comment-16283986
]
Peter Vary commented on HIVE-17990:
-----------------------------------
I might have misunderstood the intent. I thought that any table created in
Hive would be immediately readable through the Schema Registry, and that data
stored with a Schema would be readable from Hive, thus requiring all fields to
be filled in when adding data. If this is not the case, then you are right,
this is not a valid issue.
If I understand the diagrams correctly, then in the final version we could
access the column information through the Schema objects only. Currently, in
theory, every partition can have a different column set, so when we read the
partitions of a table we transfer this data for every partition again and
again. If we add extra objects to this hierarchy, then we end up transferring
those multiple times too. I might be missing something here as well, so feel
free to correct me if I am wrong.
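Just to make the data-transfer concern concrete, here is a rough sketch (not
part of the patch) using the metastore client API as I remember it - the
database and table names below are made up:
{code:java}
import java.util.List;

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.FieldSchema;
import org.apache.hadoop.hive.metastore.api.Partition;

public class PartitionColumnTransferSketch {
  public static void main(String[] args) throws Exception {
    HiveMetaStoreClient client = new HiveMetaStoreClient(new HiveConf());

    // Hypothetical database/table, -1 means "all partitions".
    List<Partition> partitions =
        client.listPartitions("default", "wide_table", (short) -1);

    int fieldSchemasTransferred = 0;
    for (Partition p : partitions) {
      // Each Partition carries its own StorageDescriptor with a full column
      // list, so the same FieldSchema data goes over the wire once per partition.
      List<FieldSchema> cols = p.getSd().getCols();
      fieldSchemasTransferred += cols.size();
    }
    System.out.println("Partitions fetched: " + partitions.size());
    System.out.println("FieldSchema objects transferred: " + fieldSchemasTransferred);

    client.close();
  }
}
{code}
My worry is that adding further per-partition objects to this hierarchy would
multiply that traffic the same way, instead of fetching shared Schema data once.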
I missed the tests - they have obvious names, so I do not know how :( Thanks
for pointing them out.
> Add Thrift and DB storage for Schema Registry objects
> -----------------------------------------------------
>
> Key: HIVE-17990
> URL: https://issues.apache.org/jira/browse/HIVE-17990
> Project: Hive
> Issue Type: Sub-task
> Components: Standalone Metastore
> Reporter: Alan Gates
> Assignee: Alan Gates
> Attachments: Adding-Schema-Registry-to-Metastore.pdf
>
>
> This JIRA tracks changes to Thrift, RawStore, and DB scripts to support
> objects in the Schema Registry.