Hi, Kaibo Zhou ~

There are several phases through which a SQL text becomes an execution graph
that can be run by the Flink runtime:


1. SQL parse: parse the SQL text into an AST (a SqlNode tree)
2. SQL node (row type) validation, which includes table/schema inference
3. SQL-to-rel conversion: convert the SqlNode into a RelNode (relational algebra)
4. Optimize the relational expression with a planner (Volcano or Hep), then convert
it into execution-convention nodes
5. Generate the code and the execution graph

For the first 3 steps, Apache Flink uses Apache Calcite as the implementation,
which means a SQL text passed to the table environment always goes through a
SQL parse/validation/sql-to-rel conversion.
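
Roughly, the first three steps map onto Calcite's Planner API like this (a
minimal sketch, not Flink's actual planner wiring; it assumes "sourceTable"
has already been registered in the root schema, which is omitted here):

    import org.apache.calcite.plan.RelOptUtil;
    import org.apache.calcite.rel.RelRoot;
    import org.apache.calcite.schema.SchemaPlus;
    import org.apache.calcite.sql.SqlNode;
    import org.apache.calcite.tools.FrameworkConfig;
    import org.apache.calcite.tools.Frameworks;
    import org.apache.calcite.tools.Planner;

    SchemaPlus rootSchema = Frameworks.createRootSchema(true);
    // ... register sourceTable in rootSchema (omitted) ...
    FrameworkConfig config = Frameworks.newConfigBuilder()
        .defaultSchema(rootSchema)
        .build();
    Planner planner = Frameworks.getPlanner(config);

    SqlNode ast = planner.parse("SELECT f1, f2 FROM sourceTable"); // step 1
    SqlNode validated = planner.validate(ast);                     // step 2
    RelRoot relRoot = planner.rel(validated);                      // step 3
    System.out.println(RelOptUtil.toString(relRoot.rel));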

For example, for a code snippet like tableEnv.sqlQuery("INSERT INTO sinkTable
SELECT f1,f2 FROM sourceTable"), the query part "SELECT f1,f2 FROM sourceTable"
is validated.

But you are right: for Flink SQL, an INSERT statement's target table is not
validated during the validation phase. Actually, we validate the "select"
clause first, extract the target table identifier, and only check that the
schemas of the "select" clause and the target table match when we invoke the
write to the sink (after step 4).
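
A hypothetical Table API sketch of what this means in practice (1.9/1.10-era
API; table registrations are omitted): the statement below parses, and its
SELECT clause validates, even if sinkTable's schema does not match — the
mismatch is only reported when the plan writes to the sink:

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.java.StreamTableEnvironment;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
    // ... register sourceTable(f1, f2) and a sinkTable with a different schema ...

    // The SELECT clause is parsed and validated here; the schema mismatch
    // against sinkTable only surfaces when the write to the sink is
    // planned (after step 4).
    tableEnv.sqlUpdate("INSERT INTO sinkTable SELECT f1, f2 FROM sourceTable");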


For most cases this is okay. Can you share your cases? What kind of
validation do you want for the insert target table?

We are planning to include the insert target table validation in step 2, for
two reasons:

• Computed column validation (stored or virtual)
• Implicit type coercion for the insert

But this would come in Flink version 1.11 ~
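
To make the two reasons concrete, here is a hypothetical DDL sketch (the
table and columns are made up, and the connector properties are elided):

    // f3 is a computed (virtual) column, so validating an INSERT needs the
    // target table's full definition; and if the SELECT produces an INT for
    // the BIGINT column f1, implicit type coercion can add the cast during
    // validation instead of failing later at the sink.
    tableEnv.sqlUpdate(
        "CREATE TABLE sinkTable ("
            + " f1 BIGINT,"
            + " f2 STRING,"
            + " f3 AS f1 * 2" // computed (virtual) column
            + ") WITH (...)"); // connector properties elided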


Best,
Danny Chan
On Dec 27, 2019, 5:44 PM +0800, dev@flink.apache.org wrote:
>
> "INSERT INTO
> sinkTable SELECT f1,f2 FROM sourceTable"
