Github user suez1224 commented on a diff in the pull request:

    https://github.com/apache/flink/pull/6201#discussion_r199065915
  
    --- Diff: flink-libraries/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/LocalExecutor.java ---
    @@ -321,6 +327,18 @@ public void stop(SessionContext session) {
                }
        }
     
    +   private <T> void executeUpdateInternal(ExecutionContext<T> context, String query) {
    +           final ExecutionContext.EnvironmentInstance envInst = context.createEnvironmentInstance();
    +
    +           envInst.getTableEnvironment().sqlUpdate(query);
    +           // create job graph with dependencies
    +           final String jobName = context.getSessionContext().getName() + ": " + query;
    +           final JobGraph jobGraph = envInst.createJobGraph(jobName);
    +
    +           // create execution
    +           new Thread(new ProgramDeployer<>(context, jobName, jobGraph, null)).start();
    --- End diff ---
    
    @twalthr, for a sink-only table, I don't think the user needs to define any rowtime attributes on it, since it will never be used as a source. For a table that acts as both source and sink, when registering it as a sink I think we only need to take care of the 'from-field' columns, since they map to actual data fields in the table. The `proctime` and 'from-source' columns can simply be ignored when building the sink schema. Maybe we should have separate helper methods for building the source schema and the sink schema. Please correct me if I missed something here. What do you think?
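
To make the idea concrete, here is a minimal sketch of what such a sink-schema helper could look like. The names in it (`FieldOrigin`, `Field`, `deriveSinkSchema`, the sample column names) are made up for illustration only and are not existing Flink or SQL Client APIs; the sketch just keeps the 'from-field' columns as physical fields and drops `proctime` and 'from-source' columns:

    import java.util.ArrayList;
    import java.util.List;

    import org.apache.flink.api.common.typeinfo.TypeInformation;
    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.table.api.TableSchema;

    public class SinkSchemaSketch {

        /** Where a declared column comes from in the table descriptor. */
        enum FieldOrigin { REGULAR, PROCTIME, ROWTIME_FROM_FIELD, ROWTIME_FROM_SOURCE }

        /** A declared column: name, type, and origin. */
        static class Field {
            final String name;
            final TypeInformation<?> type;
            final FieldOrigin origin;

            Field(String name, TypeInformation<?> type, FieldOrigin origin) {
                this.name = name;
                this.type = type;
                this.origin = origin;
            }
        }

        /**
         * Builds the schema used when registering the table as a sink:
         * 'from-field' rowtime columns are kept because they map to actual data
         * fields; proctime and 'from-source' columns are dropped because they
         * have no physical counterpart in the sink.
         */
        static TableSchema deriveSinkSchema(List<Field> declaredFields) {
            final List<String> names = new ArrayList<>();
            final List<TypeInformation<?>> types = new ArrayList<>();
            for (Field f : declaredFields) {
                if (f.origin == FieldOrigin.PROCTIME || f.origin == FieldOrigin.ROWTIME_FROM_SOURCE) {
                    continue; // no physical representation in the sink
                }
                names.add(f.name);
                types.add(f.type);
            }
            return new TableSchema(
                names.toArray(new String[0]),
                types.toArray(new TypeInformation<?>[0]));
        }

        public static void main(String[] args) {
            final List<Field> fields = new ArrayList<>();
            fields.add(new Field("user", Types.STRING, FieldOrigin.REGULAR));
            fields.add(new Field("eventTime", Types.SQL_TIMESTAMP, FieldOrigin.ROWTIME_FROM_FIELD));
            fields.add(new Field("procTime", Types.SQL_TIMESTAMP, FieldOrigin.PROCTIME));

            // The resulting sink schema contains only 'user' and 'eventTime'.
            System.out.println(deriveSinkSchema(fields));
        }
    }

A corresponding source-side helper could then keep the time attribute columns when the table is registered as a source.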

