[ https://issues.apache.org/jira/browse/SPARK-13139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Reynold Xin updated SPARK-13139:
--------------------------------
    Description: 
We currently delegate most DDL statements directly to Hive, through 
NativePlaceholder in HiveQl.scala. In Spark 2.0, we want to provide native 
implementations of these DDLs for both SQLContext and HiveContext.

The first step is to parse these DDLs properly and then create logical 
commands that encapsulate them. The actual implementation can still delegate 
to HiveNativeCommand. As an example, we should define a RenameTable command 
with the proper fields and simply delegate the implementation to 
HiveNativeCommand (we may need to track the original SQL query in order to 
run HiveNativeCommand, but we can drop the query later once we do the next 
step).
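
As a minimal sketch, assuming the RunnableCommand trait and HiveNativeCommand 
that already exist in the code base (package paths, field names, and the exact 
shape are illustrative, not final):

    import org.apache.spark.sql.{Row, SQLContext}
    import org.apache.spark.sql.catalyst.TableIdentifier
    import org.apache.spark.sql.execution.RunnableCommand
    import org.apache.spark.sql.hive.execution.HiveNativeCommand

    // Hypothetical native logical command for ALTER TABLE ... RENAME TO ...
    // It carries structured fields, but for now still executes by handing
    // the original SQL text back to Hive.
    case class RenameTable(
        oldName: TableIdentifier,
        newName: TableIdentifier,
        originalSql: String)  // kept only so HiveNativeCommand can run it
      extends RunnableCommand {

      override def run(sqlContext: SQLContext): Seq[Row] = {
        HiveNativeCommand(originalSql).run(sqlContext)
      }
    }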

Once we flesh out the internal persistent catalog API, we can switch the 
implementation of these newly added commands to use the catalog API.
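
At that point the run method above can stop round-tripping through Hive. A 
rough sketch, where sqlContext.catalog (standing in for the future persistent 
catalog) and renameTable are assumed names rather than an existing API:

    // Hypothetical future implementation against the persistent catalog API.
    // `renameTable` is a placeholder; the real method does not exist yet.
    override def run(sqlContext: SQLContext): Seq[Row] = {
      sqlContext.catalog.renameTable(oldName, newName)
      Seq.empty[Row]
    }

The originalSql field can then be removed.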


  was:
We currently delegate most DDL statements directly to Hive, through 
NativePlaceholder in HiveQl.scala. In Spark 2.0, we want to provide native 
implementations of these DDLs for both SQLContext and HiveContext.

The first step is to parse these DDLs properly and then create logical 
commands that encapsulate them. The actual implementation can still delegate 
to HiveNativeCommand.

Once we flesh out the internal persistent catalog API, we can switch the 
implementation of these newly added commands to use the catalog API.



> Create native DDL commands
> --------------------------
>
>                 Key: SPARK-13139
>                 URL: https://issues.apache.org/jira/browse/SPARK-13139
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>            Reporter: Reynold Xin
>
> We currently delegate most DDL statements directly to Hive, through 
> NativePlaceholder in HiveQl.scala. In Spark 2.0, we want to provide native 
> implementations of these DDLs for both SQLContext and HiveContext.
> The first step is to parse these DDLs properly and then create logical 
> commands that encapsulate them. The actual implementation can still delegate 
> to HiveNativeCommand. As an example, we should define a RenameTable command 
> with the proper fields and simply delegate the implementation to 
> HiveNativeCommand (we may need to track the original SQL query in order to 
> run HiveNativeCommand, but we can drop the query later once we do the next 
> step).
> Once we flesh out the internal persistent catalog API, we can switch the 
> implementation of these newly added commands to use the catalog API.


