Hi,

> In both streaming and batch mode, is UpsertStreamTableSink automatically chosen when writing to MySQL?
That is because flink-jdbc currently only provides an UpsertStreamTableSink implementation. If it were a StreamTableSink,
or more precisely an AppendStreamTableSink, batch mode would be supported.


> Also, in batch mode, is there currently no sink that can execute insert into t_user_target
> values(1,'fan'),(2,'ss')?
Of course there are: many sinks support this, for example the hive sink, the filesystem sink, and so on.
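For instance, an append-only filesystem table can take a plain INSERT in batch mode. A minimal sketch using the Flink 1.10 descriptor-style properties; the table name, path, and format settings here are illustrative assumptions, and the exact format keys may vary with your Flink version:

```sql
-- Hypothetical append-only filesystem sink; the path is a placeholder.
CREATE TABLE t_user_target_fs (
  id BIGINT,
  username VARCHAR
) WITH (
  'connector.type' = 'filesystem',
  'connector.path' = 'file:///tmp/t_user_target',
  'format.type' = 'csv',
  'format.derive-schema' = 'true'
);

INSERT INTO t_user_target_fs VALUES (1, 'fan'), (2, 'ss');
```

Because the filesystem sink is append-only rather than upsert, the batch planner accepts it where it rejects the JDBC sink.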

Best,
Jark


On Tue, 25 Feb 2020 at 18:20, 猫猫 <[email protected]> wrote:

> Thanks. One more question:
>
>
> In both streaming and batch mode, is UpsertStreamTableSink automatically chosen when writing to MySQL?
> I switched to streaming mode and the data was written correctly. What is the difference between the two modes at runtime?
>
>
> Also, in batch mode, is there currently no sink that can execute insert into t_user_target
> values(1,'fan'),(2,'ss')?
> Or rather, even if batch-mode sinks exist, is there no way to explicitly specify the corresponding sink in a SQL statement?
>
>
> ------------------ Original Message ------------------
> From: "Jark Wu" <[email protected]>;
> Sent: Tuesday, 25 February 2020, 6:11 PM
> To: "user-zh" <[email protected]>;
>
> Subject: Re: Failed to write to MySQL with flink-jdbc-driver (Flink 1.10.0)
>
>
>
> Hi,
>
> Batch mode does not support UpsertTableSink yet, but there is already a PR working on it:
> https://issues.apache.org/jira/browse/FLINK-15579
>
> Best,
> Jark
>
> On Tue, 25 Feb 2020 at 11:13, 猫猫 <[email protected]> wrote:
>
> >
> > Intent: test writing to a MySQL table by accessing the gateway over JDBC. Creating the MySQL table via jdbc-driver
> > succeeds, but inserting data fails. The SQL gateway uses the default configuration, and the same statements succeed
> > when run directly in the SQL client. Is my table definition incorrect, or is the environment misconfigured?
> > Shouldn't a table sink be used by default? The error is:
> >
> > Caused by: org.apache.flink.table.api.TableException: RetractStreamTableSink and UpsertStreamTableSink is not supported in Batch environment.
> >         at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecSink.translateToPlanInternal(BatchExecSink.scala:85)
> >         at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecSink.translateToPlanInternal(BatchExecSink.scala:48)
> >         at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
> >         at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecSink.translateToPlan(BatchExecSink.scala:48)
> >         at org.apache.flink.table.planner.delegation.BatchPlanner$$anonfun$translateToPlan$1.apply(BatchPlanner.scala:69)
> >         at org.apache.flink.table.planner.delegation.BatchPlanner$$anonfun$translateToPlan$1.apply(BatchPlanner.scala:68)
> >
> > The source code is as follows:
> >
> > Connection connection =
> >     DriverManager.getConnection("jdbc:flink://dataflow1:8083?planner=blink");
> > Statement statement = connection.createStatement();
> > sql = "CREATE TABLE t_user_target (\n" +
> >         "  id BIGINT,\n" +
> >         "  username VARCHAR\n" +
> >         ") WITH (\n" +
> >         "  'connector.type' = 'jdbc',\n" +
> >         "  'connector.driver' = 'com.mysql.jdbc.Driver',\n" +
> >         "  'connector.url' = 'jdbc:mysql://172.18.100.85:3306/targetdb',\n" +
> >         "  'connector.table' = 't1target',\n" +
> >         "  'connector.username' = 'root',\n" +
> >         "  'connector.password' = 'root',\n" +
> >         "  'connector.write.flush.max-rows' = '5000'\n" +
> >         ");";
> > statement.executeUpdate(sql);
> > statement.execute("insert into t_user_target values(1,'fan'),(2,'ss')");
