[ https://issues.apache.org/jira/browse/FLINK-23834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17407058#comment-17407058 ]
Yun Gao commented on FLINK-23834:
---------------------------------
I tested with three jobs:
# [BlockedDataStreamSourcesJob|https://github.com/gaoyunhaii/flink1.14test/blob/master/src/main/java/mixed/BlockedDataStreamSourcesJob.java]: A job with two bounded DataStream sources; the pipeline is then converted to SQL, and a part of the graph is converted back to DataStream.
# [BlockedSQLSourcesJob|https://github.com/gaoyunhaii/flink1.14test/blob/master/src/main/java/mixed/BlockedSQLSourcesJob.java]: A job with two bounded SQL sources; the pipeline is then converted to DataStream, and a part of the graph is converted back to SQL.
# [MixBlockedSourcesJob|https://github.com/gaoyunhaii/flink1.14test/blob/master/src/main/java/mixed/MixBlockedSourcesJob.java]: A job with one bounded SQL source and one bounded DataStream source, followed by both a DataStream subgraph and a SQL subgraph (a simplified sketch of this mixing pattern is shown below).
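For reference, the rough shape of the DataStream <-> Table round trip that these jobs exercise looks like the following. This is only a simplified sketch with a made-up source, query, and class name, not the actual code of the jobs linked above:
{code:java}
import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class MixedApiRoundTripSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setRuntimeMode(RuntimeExecutionMode.BATCH);
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // Bounded DataStream source (stands in for the bounded sources of the test jobs).
        DataStream<Long> numbers = env.fromSequence(0, 99);

        // DataStream -> Table: run part of the pipeline as SQL
        // (an atomic Long type is exposed as a single column named f0).
        tableEnv.createTemporaryView("numbers", numbers);
        Table evens = tableEnv.sqlQuery("SELECT f0 FROM numbers WHERE MOD(f0, 2) = 0");

        // Table -> DataStream: continue with DataStream operators.
        DataStream<Row> back = tableEnv.toDataStream(evens);
        back.print();

        env.execute("mixed-api-round-trip-sketch");
    }
}
{code}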
And each job was run with the batch mode specified in two ways:
{code:java}
// Variant 1: set the batch runtime mode on the StreamExecutionEnvironment.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(4);
env.setRuntimeMode(RuntimeExecutionMode.BATCH);
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
{code}
and
{code:java}
// Variant 2: create the StreamTableEnvironment directly with batch environment settings.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv =
        StreamTableEnvironment.create(env, EnvironmentSettings.inBatchMode());
{code}
All three jobs ran normally. From the timelines attached to this issue, we can see that for each job the job vertices are executed one after another, which indicates that the jobs are indeed running in batch mode.
There is one issue that should be independent of the functionality tested here: while developing the DataStream API part in batch mode, I hit [FLINK-22587|https://issues.apache.org/jira/browse/FLINK-22587] again when trying to use _GlobalWindow_ with the DataStream API (operators like _join_ require specifying a window). I had forgotten about the issue initially, so it took a bit of time to debug. The current workaround is to assign a timestamp of 0L to each record and use a tumbling event-time window, as sketched below. Perhaps we need to give this issue some more thought in the future.
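For completeness, here is a minimal, self-contained sketch of that workaround; the sources, keys, and class name are made up for illustration and are not taken from the actual test jobs. Every record gets the same 0L event timestamp, so a single large tumbling event-time window covers the whole bounded input and can be used for the join instead of _GlobalWindow_:
{code:java}
import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class ZeroTimestampJoinSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setRuntimeMode(RuntimeExecutionMode.BATCH);

        // Two bounded sources; the elements are placeholders for this sketch.
        DataStream<String> left = env.fromElements("a", "b", "c");
        DataStream<String> right = env.fromElements("b", "c", "d");

        // Assign the same 0L event timestamp to every record so that all records
        // of the bounded input fall into one event-time window.
        WatermarkStrategy<String> zeroTimestamps =
                WatermarkStrategy.<String>forMonotonousTimestamps()
                        .withTimestampAssigner((record, previousTimestamp) -> 0L);
        DataStream<String> leftTs = left.assignTimestampsAndWatermarks(zeroTimestamps);
        DataStream<String> rightTs = right.assignTimestampsAndWatermarks(zeroTimestamps);

        // Use a tumbling event-time window containing timestamp 0 instead of a
        // GlobalWindow, which ran into FLINK-22587 in this test.
        leftTs.join(rightTs)
                .where(value -> value)
                .equalTo(value -> value)
                .window(TumblingEventTimeWindows.of(Time.days(1)))
                .apply((l, r) -> l + "|" + r)
                .print();

        env.execute("zero-timestamp-join-sketch");
    }
}
{code}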
> Test StreamTableEnvironment batch mode manually
> -----------------------------------------------
>
> Key: FLINK-23834
> URL: https://issues.apache.org/jira/browse/FLINK-23834
> Project: Flink
> Issue Type: Improvement
> Components: Table SQL / API
> Reporter: Timo Walther
> Assignee: Yun Gao
> Priority: Blocker
> Labels: release-testing
> Fix For: 1.14.0
>
> Attachments: job_1.png, job_2.png, job_3.png
>
>
> Test a program that mixes DataStream API and Table API batch mode. Including
> some connectors.