vinishjail97 opened a new issue, #590:
URL: https://github.com/apache/incubator-xtable/issues/590

   ### Feature Request / Improvement
   
   ### Context
   Users of Apache XTable (Incubating) can translate metadata across 
table formats (Iceberg, Hudi, and Delta) and then use the tables in the 
platforms of their choice. Today there's still some usability friction: users 
need to explicitly [register](https://xtable.apache.org/docs/catalogs-index) 
the tables in the catalog of their choice (Glue, HMS, Unity Catalog, BigLake, 
etc.) and then use that catalog in their platform to run DDL and DML queries. 
   
   XTable is built on the principle of omni-directional interoperability, and 
I'm proposing an interface that allows syncing table-format metadata to 
multiple catalogs in a continuous, incremental manner. 
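
   To make the proposal concrete, here is a minimal sketch of what such an 
interface could look like. The method names and signatures are my own 
illustrative assumptions, not the actual API; the real shape is in the PR 
referenced under Implementation below.

   ```java
   /**
    * Illustrative sketch only; every method name here is a hypothetical
    * placeholder. One implementation would exist per catalog
    * (Glue, HMS, Unity Catalog, BigLake, ...).
    */
   public interface CatalogSyncClient extends AutoCloseable {

     /** Short identifier for the target catalog, e.g. "glue" or "hms". */
     String getCatalogId();

     /** Whether the table is already registered in the target catalog. */
     boolean tableExists(String databaseName, String tableName);

     /**
      * Creates the catalog entry on the first sync and incrementally updates
      * it on subsequent syncs, so queries through the catalog always resolve
      * to the latest synced metadata for the given table format.
      */
     void syncTable(String databaseName, String tableName, String tableFormat, String metadataPath);
   }
   ```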
   
   ### Why do we need this feature?
   1. Reduce friction for XTable users - after metadata generation, an XTable 
sync will register the tables in the catalogs of the user's choice. Even users 
on a single table format can still use XTable to sync that table's metadata 
across multiple catalogs. 
   2. Avoid catalog lock-in - there's no reason data/metadata in storage 
should be registered in only a single catalog; users can register a table 
across multiple catalogs depending on their use case, ecosystem, and the 
features each catalog provides.
   
   ### Implementation
   I have submitted a PR with the interfaces for `CatalogSyncClient` and 
`CatalogSyncOperations`, and I'm open to feedback on the feature request. 
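
   Purely for illustration, a driver built on the hypothetical 
`CatalogSyncClient` sketched above could fan a single table's metadata out to 
every configured catalog; the PR's actual orchestration may look different.

   ```java
   import java.util.List;

   /** Hypothetical driver showing how one sync could fan out to many catalogs. */
   public final class MultiCatalogSync {

     /**
      * Registers or refreshes one storage-backed table in every configured
      * catalog, so the same table is queryable from each of them.
      */
     public static void syncToAllCatalogs(List<CatalogSyncClient> clients, String databaseName,
         String tableName, String tableFormat, String metadataPath) {
       for (CatalogSyncClient client : clients) {
         // One client per catalog; in practice a failure against one catalog
         // should be isolated so the remaining catalogs are still updated.
         client.syncTable(databaseName, tableName, tableFormat, metadataPath);
       }
     }

     private MultiCatalogSync() {}
   }
   ```

   Keeping every catalog behind the same interface is what makes points 1 and 
2 above hold: supporting a new catalog means adding one implementation, not 
changing the sync flow.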
   
   ### Are you willing to submit PR?
   
   - [X] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   

