[ 
https://issues.apache.org/jira/browse/HUDI-1842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17390762#comment-17390762
 ] 

sivabalan narayanan commented on HUDI-1842:
-------------------------------------------

[~pzw2018]: I'm trying to understand what this flow might look like. 

Let's say I have a hudi table created with the spark datasource. 

If I then wish to do some DML in SQL, what are the steps? 
 # We create the table in SQL DDL with all the right properties, with location 
pointing to the already existing hudi table (created via the spark datasource). 
 # Once step 1 is done, can we start reading data from this table via SQL? 
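
For illustration, the two steps above might look roughly like this (table name, location, and property values here are hypothetical; which properties Spark SQL actually requires is exactly what this issue is about):

```sql
-- Step 1 (hypothetical): register an existing Hudi table, written via the
-- Spark datasource, by pointing LOCATION at its base path.
CREATE TABLE hudi_tbl USING hudi
LOCATION '/path/to/existing/hudi/table'
TBLPROPERTIES (
  primaryKey = 'uuid',      -- assumed: record key field the table was written with
  preCombineField = 'ts'    -- assumed: precombine field the table was written with
);

-- Step 2: once registered, reads (and DML) should work from SQL.
SELECT * FROM hudi_tbl;
```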

I guess the only pre-requisite here is that we need to run an upgrade step 
wherein we set some additional properties (key generator, partition columns, 
and precombine field) into the existing hoodie.properties file for the table 
of interest, before step 1. 
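
As a sketch, the upgrade step might persist entries like the following into the table's .hoodie/hoodie.properties (the exact keys and values are an assumption for illustration):

```properties
# Hypothetical additions written by the upgrade step so that
# Spark SQL can resolve the table's key/partition/precombine config:
hoodie.table.recordkey.fields=uuid
hoodie.table.partition.fields=partitionpath
hoodie.table.precombine.field=ts
```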

Is my understanding right? 

> [SQL] Spark Sql Support For The Exists Hoodie Table
> ---------------------------------------------------
>
>                 Key: HUDI-1842
>                 URL: https://issues.apache.org/jira/browse/HUDI-1842
>             Project: Apache Hudi
>          Issue Type: Sub-task
>            Reporter: pengzhiwei
>            Priority: Blocker
>              Labels: release-blocker
>             Fix For: 0.9.0
>
>
> In order to support spark sql for hoodie, we persist some table properties to 
> hoodie.properties, e.g. primaryKey, preCombineField, partition columns. 
> For existing hoodie tables, these properties are missing. We need to do some 
> work in UpgradeDowngrade to support spark sql for existing tables.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
