Github user twalthr commented on a diff in the pull request:

    https://github.com/apache/flink/pull/6201#discussion_r198223665
  
    --- Diff: flink-libraries/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/LocalExecutor.java ---
    @@ -321,6 +327,18 @@ public void stop(SessionContext session) {
                }
        }
     
    +   private <T> void executeUpdateInternal(ExecutionContext<T> context, String query) {
    +           final ExecutionContext.EnvironmentInstance envInst = context.createEnvironmentInstance();
    +
    +           envInst.getTableEnvironment().sqlUpdate(query);
    +           // create job graph with dependencies
    +           final String jobName = context.getSessionContext().getName() + ": " + query;
    +           final JobGraph jobGraph = envInst.createJobGraph(jobName);
    +
    +           // create execution
    +           new Thread(new ProgramDeployer<>(context, jobName, jobGraph, null)).start();
    --- End diff ---
    
I think even a detached job needs to return a result. Otherwise, you cannot be sure whether the job has actually been submitted; e.g., the cluster might not be reachable. In any case, every created thread should be managed by the result store, so we should use an architecture similar to the one for queries, maybe with a `StatusResult` instead of a `CollectStreamResult`. Maybe we should do the SQL Client changes in a separate PR?
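To make the idea concrete, here is a rough, Flink-free sketch of what such a `StatusResult` could look like. All names below are placeholders for illustration, not existing SQL Client API: the point is only that the result carries the submission outcome (so an unreachable cluster surfaces as a failure) and owns its deployer thread (so the result store can close it).

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

/**
 * Hypothetical result type for detached INSERT INTO jobs. It returns no rows,
 * only the outcome of the submission itself, and it owns the deployer thread
 * so the result store can manage its lifecycle.
 */
final class StatusResult implements AutoCloseable {
    private final CompletableFuture<String> submission = new CompletableFuture<>();
    private final Thread deployer;

    StatusResult(String jobName, Runnable deployJob) {
        // The deployer thread completes the future once the job was accepted,
        // or fails it if submission threw (e.g. cluster not reachable).
        this.deployer = new Thread(() -> {
            try {
                deployJob.run();
                submission.complete(jobName + " submitted");
            } catch (Exception e) {
                submission.completeExceptionally(e);
            }
        });
        this.deployer.start();
    }

    /** Blocks until the submission either succeeded or failed. */
    String awaitSubmission() throws ExecutionException, InterruptedException {
        return submission.get();
    }

    @Override
    public void close() throws InterruptedException {
        // The result store calls this, so no thread outlives its result.
        deployer.join();
    }
}
```

With something like this, `executeUpdateInternal` would register the `StatusResult` in the result store instead of starting an unmanaged `Thread` directly.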

