[ https://issues.apache.org/jira/browse/FLINK-8866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16524015#comment-16524015 ]

ASF GitHub Bot commented on FLINK-8866:
---------------------------------------

Github user twalthr commented on a diff in the pull request:

    https://github.com/apache/flink/pull/6201#discussion_r198223665
  
    --- Diff: flink-libraries/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/LocalExecutor.java ---
    @@ -321,6 +327,18 @@ public void stop(SessionContext session) {
                }
        }
     
    +   private <T> void executeUpdateInternal(ExecutionContext<T> context, String query) {
    +           final ExecutionContext.EnvironmentInstance envInst = context.createEnvironmentInstance();
    +
    +           envInst.getTableEnvironment().sqlUpdate(query);
    +           // create job graph with dependencies
    +           final String jobName = context.getSessionContext().getName() + ": " + query;
    +           final JobGraph jobGraph = envInst.createJobGraph(jobName);
    +
    +           // create execution
    +           new Thread(new ProgramDeployer<>(context, jobName, jobGraph, null)).start();
    --- End diff --
    
    I think even a detached job needs to return a result. Otherwise you cannot be sure whether the job has been submitted at all, e.g., because the cluster might not be reachable. In any case, every created thread should be managed by the result store, so we should have a similar architecture as for queries, maybe with a `StatusResult` instead of a `CollectStreamResult`. Maybe we should do the SQL Client changes in a separate PR?
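
To make the suggestion above concrete, here is a minimal sketch of how a detached INSERT INTO submission could hand back a status-only result that the result store manages, analogous to how queries are handled. StatusResult, the resultStore field, awaitSubmission, and the deployer wiring are hypothetical names for illustration, not the actual Flink SQL Client API:

    private <T> String executeUpdateInternal(ExecutionContext<T> context, String query) {
            final ExecutionContext.EnvironmentInstance envInst = context.createEnvironmentInstance();

            envInst.getTableEnvironment().sqlUpdate(query);

            // create job graph with dependencies
            final String jobName = context.getSessionContext().getName() + ": " + query;
            final JobGraph jobGraph = envInst.createJobGraph(jobName);

            // status-only result, playing the role CollectStreamResult plays for queries
            // (StatusResult and ResultStore#storeResult are hypothetical here)
            final StatusResult result = new StatusResult();
            final String resultId = resultStore.storeResult(result);

            // the result store owns the deployer thread instead of a fire-and-forget Thread
            final Thread deployer = new Thread(new ProgramDeployer<>(context, jobName, jobGraph, result));
            result.setDeployerThread(deployer);
            deployer.start();

            // surfaces submission failures, e.g. an unreachable cluster
            result.awaitSubmission();
            return resultId;
    }

In this sketch the deployer would complete the StatusResult once the cluster has accepted the job (or fail it if submission does not succeed), and closing the session would let the result store interrupt the tracked thread.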


> Create unified interfaces to configure and instantiate TableSinks
> ----------------------------------------------------------------
>
>                 Key: FLINK-8866
>                 URL: https://issues.apache.org/jira/browse/FLINK-8866
>             Project: Flink
>          Issue Type: New Feature
>          Components: Table API & SQL
>            Reporter: Timo Walther
>            Assignee: Shuyi Chen
>            Priority: Major
>              Labels: pull-request-available
>
> Similar to the efforts done in FLINK-8240, we need unified ways to configure 
> and instantiate TableSinks. Among other applications, this is necessary in 
> order to declare table sinks in an environment file of the SQL Client, such 
> that the sink can be used for {{INSERT INTO}} statements.
> Below are a few major changes we have in mind:
> 1) Add TableSinkFactory/TableSinkFactoryService similar to 
> TableSourceFactory/TableSourceFactoryService (see the sketch after this list).
> 2) Add a common property called "type" with the values (source, sink, and both) 
> for both TableSource and TableSink.
> 3) In the YAML file, replace "sources" with "tables", and use tableType to 
> identify whether a table is a source or a sink.
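
As a rough illustration of point 1), a hypothetical TableSinkFactory could mirror the existing TableSourceFactory pattern. The names and signatures below are assumptions for discussion, not the final interface proposed by this issue:

    import java.util.List;
    import java.util.Map;

    import org.apache.flink.table.sinks.TableSink;

    // Hypothetical sketch only; the actual interface is what this issue will define.
    public interface TableSinkFactory<T> {

            // properties that must match for this factory to be selected,
            // e.g. the connector identifier plus the proposed "type" property set to sink
            Map<String, String> requiredContext();

            // property keys the factory understands, used for validation
            List<String> supportedProperties();

            // creates and configures the TableSink from the given properties
            TableSink<T> createTableSink(Map<String, String> properties);
    }

A corresponding TableSinkFactoryService would then discover such factories (presumably via java.util.ServiceLoader, as on the source side) and match the properties declared in the environment file against requiredContext().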


