Timo Walther created FLINK-22880:
Summary: Remove "blink" term in code base
Key: FLINK-22880
URL: https://issues.apache.org/jira/browse/FLINK-22880
Project: Flink
Issue Type
Timo Walther created FLINK-22879:
Summary: Remove "blink" suffix from table modules
Key: FLINK-22879
URL: https://issues.apache.org/jira/browse/FLINK-22879
Project: Flink
Issue
Timo Walther created FLINK-22878:
Summary: Allow placeholder options in format factories
Key: FLINK-22878
URL: https://issues.apache.org/jira/browse/FLINK-22878
Project: Flink
Issue Type
Timo Walther created FLINK-22877:
Summary: Remove BatchTableEnvironment and related API classes
Key: FLINK-22877
URL: https://issues.apache.org/jira/browse/FLINK-22877
Project: Flink
Issue
Timo Walther created FLINK-22872:
Summary: Remove usages of legacy planner test utilities in Python
Key: FLINK-22872
URL: https://issues.apache.org/jira/browse/FLINK-22872
Project: Flink
Timo Walther created FLINK-22864:
Summary: Remove the legacy planner code base
Key: FLINK-22864
URL: https://issues.apache.org/jira/browse/FLINK-22864
Project: Flink
Issue Type: Sub-task
Timo Walther created FLINK-22857:
Summary: Add possibility to call built-in functions in
SpecializedFunction
Key: FLINK-22857
URL: https://issues.apache.org/jira/browse/FLINK-22857
Project: Flink
Timo Walther created FLINK-22849:
Summary: Drop remaining usages of legacy planner in E2E tests and
Python
Key: FLINK-22849
URL: https://issues.apache.org/jira/browse/FLINK-22849
Project: Flink
Timo Walther created FLINK-22832:
Summary: Drop usages of legacy planner in SQL Client
Key: FLINK-22832
URL: https://issues.apache.org/jira/browse/FLINK-22832
Project: Flink
Issue Type: Sub
Timo Walther created FLINK-22831:
Summary: Drop usages of legacy planner in Scala shell
Key: FLINK-22831
URL: https://issues.apache.org/jira/browse/FLINK-22831
Project: Flink
Issue Type: Sub
Timo Walther created FLINK-22829:
Summary: Drop usages of legacy planner in HBase modules
Key: FLINK-22829
URL: https://issues.apache.org/jira/browse/FLINK-22829
Project: Flink
Issue Type
Timo Walther created FLINK-22824:
Summary: Drop usages of legacy planner in Kafka modules
Key: FLINK-22824
URL: https://issues.apache.org/jira/browse/FLINK-22824
Project: Flink
Issue Type
Timo Walther created FLINK-22822:
Summary: Drop usages of legacy planner in JDBC module
Key: FLINK-22822
URL: https://issues.apache.org/jira/browse/FLINK-22822
Project: Flink
Issue Type: Sub
Timo Walther created FLINK-22813:
Summary: Drop usages of legacy planner in Hive module
Key: FLINK-22813
URL: https://issues.apache.org/jira/browse/FLINK-22813
Project: Flink
Issue Type: Sub
Timo Walther created FLINK-22811:
Summary: Drop usages of legacy planner in Avro module
Key: FLINK-22811
URL: https://issues.apache.org/jira/browse/FLINK-22811
Project: Flink
Issue Type: Sub
Timo Walther created FLINK-22810:
Summary: Drop usages of flink-table-planner in Elasticsearch
modules
Key: FLINK-22810
URL: https://issues.apache.org/jira/browse/FLINK-22810
Project: Flink
Timo Walther created FLINK-22782:
Summary: Remove legacy planner from Chinese docs
Key: FLINK-22782
URL: https://issues.apache.org/jira/browse/FLINK-22782
Project: Flink
Issue Type: Sub-task
Timo Walther created FLINK-22774:
Summary: Package flink-sql-connector-kinesis with newer Guava
version
Key: FLINK-22774
URL: https://issues.apache.org/jira/browse/FLINK-22774
Project: Flink
Timo Walther created FLINK-22748:
Summary: Allow dynamic target topic selection in Kafka sinks
Key: FLINK-22748
URL: https://issues.apache.org/jira/browse/FLINK-22748
Project: Flink
Issue
Timo Walther created FLINK-22747:
Summary: Update commons-io to 2.8
Key: FLINK-22747
URL: https://issues.apache.org/jira/browse/FLINK-22747
Project: Flink
Issue Type: Improvement
Hi Konstantin,
thanks for starting this discussion. I was also about to provide some
feedback because I have the feeling that the bot is too aggressive at
the moment.
Even a 14-day interval is a short period of time for bigger efforts
that might include several subtasks. Currently, if we sp
Timo Walther created FLINK-22744:
Summary: Simplify TableEnvironment.create()
Key: FLINK-22744
URL: https://issues.apache.org/jira/browse/FLINK-22744
Project: Flink
Issue Type: Sub-task
Timo Walther created FLINK-22740:
Summary: Drop legacy planner from docs
Key: FLINK-22740
URL: https://issues.apache.org/jira/browse/FLINK-22740
Project: Flink
Issue Type: Sub-task
Timo Walther created FLINK-22709:
Summary: Drop usages of EnvironmentSettings.useOldPlanner()
Key: FLINK-22709
URL: https://issues.apache.org/jira/browse/FLINK-22709
Project: Flink
Issue
Timo Walther created FLINK-22697:
Summary: Clean up examples to not use legacy planner anymore
Key: FLINK-22697
URL: https://issues.apache.org/jira/browse/FLINK-22697
Project: Flink
Issue
Hi Konstantin,
thanks for starting the discussion. From the Table API side, we also
fixed a couple of critical issues already that justify releasing a
1.13.1 asap.
Personally, I would like to include
https://issues.apache.org/jira/browse/FLINK-22666 that fixes some last
issues with the Scal
Timo Walther created FLINK-22666:
Summary: Add more test cases for bridging to Scala DataStream API
Key: FLINK-22666
URL: https://issues.apache.org/jira/browse/FLINK-22666
Project: Flink
Timo Walther created FLINK-22623:
Summary: Drop BatchTableSource HBaseTableSource
Key: FLINK-22623
URL: https://issues.apache.org/jira/browse/FLINK-22623
Project: Flink
Issue Type: Sub-task
Timo Walther created FLINK-22622:
Summary: Drop BatchTableSource ParquetTableSource
Key: FLINK-22622
URL: https://issues.apache.org/jira/browse/FLINK-22622
Project: Flink
Issue Type: Sub
Timo Walther created FLINK-22620:
Summary: Uncouple OrcTableSource from legacy planner
Key: FLINK-22620
URL: https://issues.apache.org/jira/browse/FLINK-22620
Project: Flink
Issue Type: Sub
Timo Walther created FLINK-22619:
Summary: Drop usages of BatchTableEnvironment
Key: FLINK-22619
URL: https://issues.apache.org/jira/browse/FLINK-22619
Project: Flink
Issue Type: Sub-task
Timo Walther created FLINK-22590:
Summary: Add Scala implicit conversions for new API methods
Key: FLINK-22590
URL: https://issues.apache.org/jira/browse/FLINK-22590
Project: Flink
Issue
Timo Walther created FLINK-22575:
Summary: Offer a more recent flink-shaded-hadoop-2-uber
Key: FLINK-22575
URL: https://issues.apache.org/jira/browse/FLINK-22575
Project: Flink
Issue Type
Timo Walther created FLINK-22537:
Summary: Add documentation how to interact with DataStream API
Key: FLINK-22537
URL: https://issues.apache.org/jira/browse/FLINK-22537
Project: Flink
Issue
Timo Walther created FLINK-22426:
Summary: Reduce usage of TableSchema in the planner
Key: FLINK-22426
URL: https://issues.apache.org/jira/browse/FLINK-22426
Project: Flink
Issue Type: Sub
re is a new RC or 1.13.1 otherwise. Of course if there are no
objections from others.
Best,
Dawid
On 21/04/2021 10:52, Timo Walther wrote:
> Hi everyone,
>
> sorry for being so late with this request, but fixing a couple of
> downstream bugs
Hi everyone,
sorry for being so late with this request, but fixing a couple of
downstream bugs had higher priority than this issue and was also blocking
it. Nevertheless, I would like to ask for permission to merge the
FLINK-19980[1] to the 1.13 branch as an experimental feature to add the
Timo Walther created FLINK-22378:
Summary: Type mismatch when declaring SOURCE_WATERMARK on
TIMESTAMP_LTZ column
Key: FLINK-22378
URL: https://issues.apache.org/jira/browse/FLINK-22378
Project: Flink
Timo Walther created FLINK-22321:
Summary: Drop old user-defined function stack
Key: FLINK-22321
URL: https://issues.apache.org/jira/browse/FLINK-22321
Project: Flink
Issue Type: Sub-task
Timo Walther created FLINK-22151:
Summary: Implement type inference for agg functions
Key: FLINK-22151
URL: https://issues.apache.org/jira/browse/FLINK-22151
Project: Flink
Issue Type: Sub
Timo Walther created FLINK-22138:
Summary: Better support structured types as toDataStream output
Key: FLINK-22138
URL: https://issues.apache.org/jira/browse/FLINK-22138
Project: Flink
Issue
Hi everyone,
I support Leonard's request. It was foreseeable that the changes of
FLIP-162 would be massive and would take some time. By looking at PRs such as
https://github.com/apache/flink/pull/15280
I would also vote for giving a bit more time for proper reviews and
finalizing this story fo
Timo Walther created FLINK-21989:
Summary: Add a SupportsSourceWatermark ability interface
Key: FLINK-21989
URL: https://issues.apache.org/jira/browse/FLINK-21989
Project: Flink
Issue Type
Timo Walther created FLINK-21934:
Summary: Add new StreamTableEnvironment.toDataStream
Key: FLINK-21934
URL: https://issues.apache.org/jira/browse/FLINK-21934
Project: Flink
Issue Type: Sub
Timo Walther created FLINK-21913:
Summary: Update DynamicTableFactory.Context to use
ResolvedCatalogTable
Key: FLINK-21913
URL: https://issues.apache.org/jira/browse/FLINK-21913
Project: Flink
Timo Walther created FLINK-21912:
Summary: Introduce Schema and ResolvedSchema in Python API
Key: FLINK-21912
URL: https://issues.apache.org/jira/browse/FLINK-21912
Project: Flink
Issue Type
Timo Walther created FLINK-21911:
Summary: Support arithmetic MIN/MAX in SQL
Key: FLINK-21911
URL: https://issues.apache.org/jira/browse/FLINK-21911
Project: Flink
Issue Type: Sub-task
Timo Walther created FLINK-21872:
Summary: Create a utility to create DataStream API's DataType and
Schema
Key: FLINK-21872
URL: https://issues.apache.org/jira/browse/FLINK-21872
Project:
Timo Walther created FLINK-21801:
Summary: Update to use new schema in Table, TableResult,
TableOperation
Key: FLINK-21801
URL: https://issues.apache.org/jira/browse/FLINK-21801
Project: Flink
20:59, Leonard Xu wrote:
+1 for the roadmap.
Thanks Timo for driving this.
Best,
Leonard
On 4 Mar 2021, at 20:40, Timo Walther wrote:
Last call for feedback on this topic.
It seems everyone agrees to finally complete FLIP-32. Since FLIP-32 has
been accepted for a very long time, I think we don
Timo Walther created FLINK-21709:
Summary: Officially deprecate the legacy planner
Key: FLINK-21709
URL: https://issues.apache.org/jira/browse/FLINK-21709
Project: Flink
Issue Type: Sub-task
Hi Leonard,
I'm fine with dropping the old buggy behavior immediately. Users can
still implement a UDF with the old behavior if needed. I hope the new
functions will be well-tested so that a fallback to the old functions is
not necessary as a workaround. It will definitely avoid confusion for
u
's better for us to be more
focused on a single planner.
Your proposed roadmap looks good to me, +1 from my side and thanks
again for all your efforts!
Best,
Kurt
On Thu, Feb 25, 2021 at 5:01 PM Timo Walther wrote:
Hi everyone,
since Flink 1.9 we have supported two SQL planners. Most of
Timo Walther created FLINK-21586:
Summary: Implement ResolvedExpression.asSerializableString for SQL
Key: FLINK-21586
URL: https://issues.apache.org/jira/browse/FLINK-21586
Project: Flink
+1 (binding)
Regards,
Timo
On 03.03.21 04:14, Jark Wu wrote:
+1 (binding)
Best,
Jark
On Tue, 2 Mar 2021 at 10:42, Leonard Xu wrote:
Hi all,
I would like to start the vote for FLIP-162 [1], which has been discussed
and
reached a consensus in the discussion thread [2].
Please vote +1 to ap
On Mon, Mar 1, 2021 at 5:11 PM, Kurt Young wrote:
I'm +1 for either:
1. introduce a sql client specific option, or
2. Introduce a table config option and make it apply to both table
module &
sql client.
It would be the FLIP owner's call to decide.
Best,
Kurt
On Mon, Mar 1, 2021 at 3:25 PM
Best,
Kurt
On Mon, Mar 1, 2021 at 3:58 PM Timo Walther wrote:
and btw it is interesting to notice that AWS seems to do the approach
that I suggested first.
All functions are SQL standard compliant, and only dedicated functions
with a prefix such as CURRENT_ROW_TIMESTAMP divert from the sta
I would vote -0 here.
I fear that we are creating potential silos where a team doesn't know
what is going on in the other teams.
Regards,
Timo
On 01.03.21 10:47, Jark Wu wrote:
I also have some concerns about splitting python and sql.
Because I have seen some SQL questions users reported bu
we will suffer
from such a config option, because users can always make Flink SQL
behave strangely by setting this config in an undesired way.
I prefer to not introduce such config until we have to. Leonard's proposal
already makes almost all users happy thus I think
we can still wait.
Best
and btw it is interesting to notice that AWS seems to do the approach
that I suggested first.
All functions are SQL standard compliant, and only dedicated functions
with a prefix such as CURRENT_ROW_TIMESTAMP divert from the standard.
Regards,
Timo
On 01.03.21 08:45, Timo Walther wrote:
How
ther systems and databases will
evaluate
these functions during query start. And for streaming users, I have
already seen
some users are expecting these functions to be calculated per record.
Thus I think we can make the behavior determined together with
execution
mode.
One exception would be PROCTIME(),
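The distinction discussed above (evaluating a time function once at query start vs. once per record) can be sketched in plain Python. This is a toy model of the two semantics, not Flink code; the function and mode names are illustrative only:

```python
import datetime

def run_query(rows, time_fn_mode):
    """Simulate evaluating a time function such as CURRENT_TIMESTAMP.

    'query-start': evaluate once when the query begins (batch-like).
    'per-record': evaluate freshly for every incoming row (streaming-like).
    """
    if time_fn_mode == "query-start":
        ts = datetime.datetime.now(datetime.timezone.utc)
        return [(row, ts) for row in rows]
    elif time_fn_mode == "per-record":
        return [(row, datetime.datetime.now(datetime.timezone.utc))
                for row in rows]
    raise ValueError(f"unknown mode: {time_fn_mode}")

batch = run_query(["a", "b", "c"], "query-start")
# In query-start mode, every row sees the exact same timestamp.
assert len({ts for _, ts in batch}) == 1
```

Tying the mode to the execution mode, as suggested in the thread, would mean batch queries get the deterministic query-start behavior while streaming queries get per-record evaluation.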
on.
I
don't think the drawback is really critical because many systems have
commands playing the same role under different names.
Best,
Shengkai
On Thu, Feb 25, 2021 at 4:23 PM, Timo Walther wrote:
The `table.` prefix is meant to be a general option in the table
ecosystem. Not necessarily attached to
Hi everyone,
since Flink 1.9 we have supported two SQL planners. Most of the original
plan of FLIP-32 [1] has been implemented. The Blink code merge has been
completed and many additional features have been added exclusively to
the new planner. The new planner is now in a much better shape tha
we might
cause if we introduce such a method of our own, I think it's better to wait
for some more
feedback.
Best,
Kurt
On Tue, Feb 23, 2021 at 9:45 PM Timo Walther wrote:
Hi Kurt,
we can also shorten it to `table.dml-sync` if that would help. Then it
would confuse users that do a regular
tax is SQL-client-specific, but if it's
general Flink SQL syntax we should consider this (one way or another).
Regards
Ingo
On Fri, Feb 12, 2021 at 3:53 PM Timo Walther
wrote:
Hi Shengkai,
thanks for updating the FLIP.
I have one last comment for the option `table.execution.mode`. Should
Timo Walther created FLINK-21435:
Summary: Add a SqlExpression in table-common
Key: FLINK-21435
URL: https://issues.apache.org/jira/browse/FLINK-21435
Project: Flink
Issue Type: Sub-task
+1 (binding)
Thanks,
Timo
On 22.02.21 04:44, Jark Wu wrote:
+1 (binding)
Best,
Jark
On Mon, 22 Feb 2021 at 11:06, Shengkai Fang wrote:
Hi devs
It seems we have reached consensus on FLIP-163[1] in the discussion[2]. So
I'd like to start the vote for this FLIP.
Please vote +1 to approve th
Timo Walther created FLINK-21396:
Summary: Update the Catalog APIs
Key: FLINK-21396
URL: https://issues.apache.org/jira/browse/FLINK-21396
Project: Flink
Issue Type: Sub-task
Timo Walther created FLINK-21395:
Summary: Implement Schema, ResolvedSchema, SchemaResolver
Key: FLINK-21395
URL: https://issues.apache.org/jira/browse/FLINK-21395
Project: Flink
Issue Type
Sorry, I forgot to add the [RESULT] label in the mail's subject.
I'm sending this mail to satisfy the process.
Regards,
Timo
On 18.02.21 12:02, Timo Walther wrote:
Hi everyone,
The voting time for FLIP-164: Improve Schema Handling in Catalogs [1]
has passed. I'm closi
Timo Walther created FLINK-21394:
Summary: FLIP-164: Improve Schema Handling in Catalogs
Key: FLINK-21394
URL: https://issues.apache.org/jira/browse/FLINK-21394
Project: Flink
Issue Type
21 at 8:25 AM Jark Wu wrote:
+1 (binding)
Best,
Jark
On 12 Feb 2021, at 20:37, Dawid Wysakowicz wrote:
+1 (binding)
Best,
Dawid
On 12/02/2021 13:33, Timo Walther wrote:
Hi everyone,
I'd like to start a vote on FLIP-164 [1] which was discussed in [2].
The vote will be open for at least 72 hou
Timo Walther created FLINK-21392:
Summary: Support to block in connected streams
Key: FLINK-21392
URL: https://issues.apache.org/jira/browse/FLINK-21392
Project: Flink
Issue Type: New
I am fine with the new option name.
Best,
Shengkai
On Tue, Feb 9, 2021 at 5:35 PM, Timo Walther wrote:
Yes, `TableEnvironment#executeMultiSql()` can be future work.
@Rui, Shengkai: Are you also fine with this conclusion?
Thanks,
Timo
On 09.02.21 10:14, Jark Wu wrote:
I'm fine with `table.multi-dml-
Hi everyone,
I'd like to start a vote on FLIP-164 [1] which was discussed in [2].
The vote will be open for at least 72 hours. Unless there are any
objections, I'll close it by February 17th, 2021 (due to weekend) if we
have received sufficient votes.
[1]
https://cwiki.apache.org/confluence/
declaration.
Regards,
Timo
On 10.02.21 12:12, Timo Walther wrote:
Hi Jark,
I don't think many users use WatermarkSpec. UniqueConstraint could cause
some confusion but this mostly affects catalog or connector
implementers. After deprecating the old APIs it should be obvious when
an out
? If we want to introduce
a new stack, it would be better to have a different name; otherwise it's
easy for users to pick the wrong class.
Best,
Jark
On Wed, 10 Feb 2021 at 09:49, Rui Li wrote:
I see. Makes sense to me. Thanks Timo for the detailed explanation!
On Tue, Feb 9, 2021 at 9:48 PM
to me overall. I have two questions.
1. When should we use a resolved schema and when to use an unresolved one?
2. The FLIP mentions only resolved tables/views can be stored into a
catalog. Does that mean the getTable method should also return a resolved
object?
On Tue, Feb 9, 2021 at 6:29 PM Timo Wa
look good. I agree it is an
important step towards FLIP-129 and FLIP-136. Personally I feel
comfortable voting on the document.
Best,
Dawid
On 05/02/2021 16:09, Timo Walther wrote:
Hi everyone,
you might have seen that we discussed a better schema API in past as
part of FLIP-129 and FLIP-13
uteMultiSql() in
the future, right?
Best,
Jark
On Tue, 9 Feb 2021 at 16:37, Timo Walther wrote:
Hi everyone,
I understand Rui's concerns. `table.dml-sync` should not apply to
regular `executeSql`. Actually, this option only makes sense when
executing multiple statements. Once we have a
`TableEn
Hi everyone,
I understand Rui's concerns. `table.dml-sync` should not apply to
regular `executeSql`. Actually, this option only makes sense when
executing multiple statements. Once we have a
`TableEnvironment.executeMultiSql()` this config could be considered.
Maybe we can find a better generic
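The sync/async distinction discussed in this thread can be modeled with a small Python sketch. This is a toy stand-in, not Flink's actual `TableEnvironment`; the class and option names are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor
import time

class ToyTableEnvironment:
    """Toy model of executing a DML statement under a 'dml-sync' flag."""

    def __init__(self, dml_sync: bool):
        self.dml_sync = dml_sync
        self._pool = ThreadPoolExecutor(max_workers=2)

    def execute_sql(self, statement: str):
        future = self._pool.submit(self._run_job, statement)
        if self.dml_sync:
            return future.result()  # sync: block until the job finishes
        return future               # async: return a handle immediately

    @staticmethod
    def _run_job(statement: str) -> str:
        time.sleep(0.01)  # pretend the job takes a while
        return f"done: {statement}"

# Sync mode: the call itself waits for the result.
env = ToyTableEnvironment(dml_sync=True)
assert env.execute_sql("INSERT INTO t SELECT 1") == "done: INSERT INTO t SELECT 1"

# Async mode: the caller gets a handle and decides when to wait.
env_async = ToyTableEnvironment(dml_sync=False)
handle = env_async.execute_sql("INSERT INTO t SELECT 2")
assert handle.result() == "done: INSERT INTO t SELECT 2"
```

This also illustrates why the option matters most for multi-statement scripts: with async semantics, consecutive INSERTs would be submitted without waiting for each other.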
th a Flink job
detach mode. How about `table.dml-async`?
Thanks,
Timo
On 08.02.21 15:55, Jark Wu wrote:
Thanks Timo,
I'm +1 for option#2 too.
I think we have addressed all the concerns and can start a vote.
Best,
Jark
On Mon, 8 Feb 2021 at 22:19, Timo Walther wrote:
Hi Jark,
you are
tible, and provides flexible configurable behavior
3) sync for both batch and streaming DML, and can be
set to async via a configuration.
==> +0 for this, because it breaks all the compatibility, esp. our main
users.
Best,
Jark
On Mon, 8 Feb 2021 at 17:34, Timo Walther wrote:
Hi Jark, Hi
o Alternative 1:
We consider batch/streaming mode and block for batch INSERT INTO and async
for streaming INSERT INTO/STATEMENT SET.
And this behavior is consistent across CLI and files.
Best,
Jark
[1]:
https://github.com/apache/flink/blob/master/flink-end-to-end-tests/flink-end-to-end-tests-comm
Hi everyone,
you might have seen that we discussed a better schema API in past as
part of FLIP-129 and FLIP-136. We also discussed this topic during
different releases:
https://issues.apache.org/jira/browse/FLINK-17793
Jark and I had an offline discussion how we can finally fix this
shortco
about REMOVE vs DELETE though.
While Flink doesn't need to follow Hive syntax, as far as I know, most
users who are requesting these features were previously Hive users. So I
wonder whether we can support both LIST/SHOW JARS and REMOVE/DELETE JARS
as synonyms? It's just like lots
se
`ADD JAR` -> `CREATE JAR`,
`DELETE JAR` -> `DROP JAR`,
`LIST JAR` -> `SHOW JAR`.
*Regarding #5*: I agree with you that we'd better keep consistent.
*Regarding #6*: Yes. Most of the commands should belong to the table
environment. In the Summary section, I use the tag to identify
xecution for all of them, or #2 enable the rest of the modules and
return a warning to users? My personal preference goes to #1 for
simplicity. What do you think?
Best,
Jane
On Tue, Feb 2, 2021 at 3:53 PM Timo Walther wrote:
+1
@Jane Can you summarize our discussion in the JIRA issue?
Thanks,
T
Thanks for this great proposal Shengkai. This will give the SQL Client a
very good update and make it production ready.
Here is some feedback from my side:
1) SQL client specific options
I don't think that `sql-client.planner` and `sql-client.execution.mode`
are SQL Client specific. Similar t
Timo Walther created FLINK-21239:
Summary: Upgrade Calcite version to 1.28
Key: FLINK-21239
URL: https://issues.apache.org/jira/browse/FLINK-21239
Project: Flink
Issue Type: Improvement
vide consistent
results. Thus, I think we may need to think more from the users'
perspective.
Best,
Jark
On Mon, 1 Feb 2021 at 23:06, Timo Walther wrote:
Hi Leonard,
thanks for considering this issue as well. +1 for the proposed config
option. Let's start a voting thread once the
g the LOAD statement instead of CREATE, so I think it's fine
that it does some implicit things.
Best,
Jark
On Tue, 2 Feb 2021 at 00:48, Timo Walther wrote:
Not the module itself but the ModuleManager should handle this case, yes.
Regards,
Timo
On 01.02.21 17:35, Jane Chan wrote:
+1 to Jark
Feb 1, 2021 at 10:02 PM Timo Walther wrote:
+1 to Jark's proposal
I like the difference between just loading and actually enabling these
modules.
@Rui: I would use the same behavior as catalogs here. You cannot `USE` a
catalog without creating it before.
Another question is whether a LOAD
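The difference between just loading a module and actually enabling it, which Timo highlights above, can be sketched as a toy Python model. This is not Flink's real `ModuleManager`; the class, method, and module names are illustrative assumptions:

```python
class ModuleManager:
    """Toy module manager: loading registers a module; USE controls which
    loaded modules are enabled and their function-resolution order."""

    def __init__(self):
        self.loaded = {}     # name -> set of function names the module offers
        self.use_order = []  # enabled modules, in resolution order

    def load(self, name, functions):
        self.loaded[name] = set(functions)
        self.use_order.append(name)  # newly loaded modules enabled by default

    def use(self, names):
        for n in names:
            if n not in self.loaded:
                # analogous to USE on a catalog that was never created
                raise ValueError(f"module {n} is not loaded")
        self.use_order = list(names)

    def resolve(self, func_name):
        for n in self.use_order:
            if func_name in self.loaded[n]:
                return n
        raise KeyError(func_name)

mm = ModuleManager()
mm.load("core", {"upper"})
mm.load("hive", {"upper", "nvl"})
assert mm.resolve("upper") == "core"   # earlier-loaded module wins
mm.use(["hive", "core"])               # reorder resolution: hive first
assert mm.resolve("upper") == "hive"
assert mm.resolve("nvl") == "hive"
```

The sketch mirrors the catalog analogy: `use()` on a module that was never loaded fails, just as you cannot `USE` a catalog without creating it first.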
esult when
users use their streaming pipeline SQL to run a batch pipeline (e.g.
backfilling), and users also cannot control these functions' behavior.
What do you think?
Thanks,
Leonard
On 1 Feb 2021, at 18:23, Timo Walther wrote:
Parts of the FLIP can already be implemented without a completed votin
deprecated `TableFactory` class.
Regarding #1, I think the point lies in whether changing the resolution
order implies an `unload` operation explicitly (i.e., users could sense
it). What do others think?
Best,
Jane
On Mon, Feb 1, 2021 at 6:41 PM Timo Walther wrote:
IMHO I would rather un
rading the priority of the loaded module(s).
2. `LOAD/UNLOAD MODULE` vs. `CREATE/DROP MODULE` syntax
Jark Wu and Nicholas Jiang proposed to use `CREATE/DROP MODULE` instead
of `LOAD/UNLOAD MODULE` because
1) From a pure SQL user's perspective, maybe `CREATE MODULE + USE
MODUL
osal can resolve almost all user problems; the divergence is whether we
need to spend considerable energy just to get slightly more accurate
semantics. I think we need a tradeoff.
Best,
Leonard
[1]
https://trino.io/docs/current/functions/datetime.html#current_timestamp
<
https://trin
Jark Wu and Nicholas Jiang proposed to use `CREATE/DROP MODULE` instead
of `LOAD/UNLOAD MODULE` because
1) From a pure SQL user's perspective, maybe `CREATE MODULE + USE MODULE`
is easier to use than `LOAD/UNLOAD`.
2) This will be very similar to how catalogs are used now.
Timo Walther created FLINK-21226:
Summary: Reintroduce TableColumn.of for backwards compatibility
Key: FLINK-21226
URL: https://issues.apache.org/jira/browse/FLINK-21226
Project: Flink
Issue
Timo Walther created FLINK-21225:
Summary: OverConvertRule does not consider distinct
Key: FLINK-21225
URL: https://issues.apache.org/jira/browse/FLINK-21225
Project: Flink
Issue Type: Bug