morningman commented on a change in pull request #3819:
URL: https://github.com/apache/incubator-doris/pull/3819#discussion_r438219946
##########
File path: fe/src/main/cup/sql_parser.cup
##########
@@ -1244,6 +1244,15 @@ data_desc ::=
RESULT = new DataDescription(tableName, partitionNames, files,
colList, colSep, fileFormat,
columnsFromPath, isNeg, colMappingList, whereExpr);
:}
+ | KW_DATA KW_FROM KW_TABLE ident:srcTableName
+ opt_negative:isNeg
+ KW_INTO KW_TABLE ident:tableName
+ opt_partition_names:partitionNames
+ opt_col_mapping_list:colMappingList
Review comment:
How do we map the Hive table's columns to the OLAP table's columns?
And what if a column has the same name in both tables?
How about following the DeltaLake
[`COPY INTO`](https://docs.microsoft.com/en-us/azure/databricks/spark/latest/spark-sql/language-manual/copy-into)
statement and using a `SELECT` statement instead?
```
DATA AS (SELECT xxx FROM hive_table WHERE xxx)
INTO TABLE olap_table
PARTITION(p1, p2, ...)
(k1, k2, k3, v1, v2) /* the columns of the olap table to be loaded */
```
A SQL-based syntax is more flexible, and it can easily be used by Spark to read from a Hive table.