Timo Walther created FLINK-14437:
Summary: Drop the legacy planner
Key: FLINK-14437
URL: https://issues.apache.org/jira/browse/FLINK-14437
Project: Flink
Issue Type: Sub-task
se verbs anywhere else (I think) in config
options.
Now for the "exec" or "execution", personally I like the longer
version as it is clearer.
So, to summarise, I would vote for "deployment", "execution", and
"pipeline" for job invariants, like the j
Hi Kostas,
can we still discuss the naming of the properties? For me, having
"execution" and "exector" as prefixes might be confusing in the future
and difficult to identify if you scan through a list of properties.
How about `deployment` and `execution`? Or `deployer` and `exec`?
Regards,
y good enough, I only left some
minor comments there.
Best,
Jark
On Fri, 4 Oct 2019 at 23:54, Timo Walther wrote:
Hi everyone,
I would like to propose FLIP-65 that describes how we want to deal with
data types and their inference/extraction in the Table API in the
future. I have collected many co
Hi all,
I agree with Jark. Having a voting with at least 3 binding votes makes
sense for API changes. It also forces people to question the
introduction of another config option that might make the configuration
of Flink more complicated. A FLIP is usually a bigger effort with long
term
+1
Thanks,
Timo
On 15.10.19 20:50, Bowen Li wrote:
Hi all,
I'd like to kick off a voting thread for FLIP-68: Extend Core Table System
with Pluggable Modules [1], as we have reached consensus in [2].
The voting period will be open for at least 72 hours, ending at 7pm Oct 18
UTC.
Thanks,
+1
Thanks,
Timo
On 15.10.19 19:04, Bowen Li wrote:
+1
On Tue, Oct 15, 2019 at 5:09 AM Jark Wu wrote:
+1 from my side.
Cheers,
Jark
On Tue, 15 Oct 2019 at 19:11, vino yang wrote:
+1
Best,
Vino
On Tue, 15 Oct 2019 at 16:31, Aljoscha Krettek wrote:
+1
Best,
Aljoscha
On 14. Oct 2019, at
+1
Thanks,
Timo
On 15.10.19 17:07, Till Rohrmann wrote:
Sorry for the confusion. I should have checked with an external mail
client. Thanks a lot for the clarification.
Cheers,
Till
On Tue, Oct 15, 2019 at 2:07 PM Jark Wu wrote:
+1
It is a separate [VOTE] thread in my Mail client.
Best,
Hi Stephan,
+1 for keeping it in a separate repository for fast release cycles and
stability until it is mature enough. But we should definitely merge it
back to the core repo also for marketing reasons.
IMHO side projects tend to be overlooked by the outside world even
though they are
bit function instances, are we also planning to drop the
support of `table.select(bf1('a))` in Scala Table API?
Regards,
Jark
On Fri, 11 Oct 2019 at 15:34, Timo Walther wrote:
Thanks for your feedback Kurt and Jark.
I'm wondering why we allow setting `val bf1 = new BloomFilter(0.001)`,
`val
Hi everyone,
as mentioned in the [DISCUSS] thread of FLIP-54 [1]. The evolution of
ConfigOption is a very sensitive topic that touches many implementers.
We will therefore discard FLIP-54 and split it into several discussions
and FLIPs.
We propose FLIP-77 for introducing ConfigOptions with
and change them all together when refactoring in the near future.
Regards,
Jark
On Thu, 10 Oct 2019 at 21:12, Timo Walther wrote:
Hi Jark,
restricting one module instance per kind sounds good to me. Modules can
implement hashCode/equals and we can perform the check you mentioned. The
equals()
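The one-instance-per-kind check discussed above could look roughly like the sketch below. All names (`Module`, `ModuleManager`, `HiveModule`) are illustrative stand-ins, not the actual Flink classes:

```java
import java.util.ArrayList;
import java.util.List;

// Modules of the same kind compare equal via equals()/hashCode(),
// so a simple contains() check can reject duplicates.
interface Module {
}

class HiveModule implements Module {
    @Override
    public boolean equals(Object o) { return o instanceof HiveModule; }

    @Override
    public int hashCode() { return HiveModule.class.hashCode(); }
}

class ModuleManager {
    private final List<Module> modules = new ArrayList<>();

    /** Rejects a second module that equals() an already loaded one. */
    void load(Module module) {
        if (modules.contains(module)) {
            throw new IllegalStateException("A module of this kind is already loaded.");
        }
        modules.add(module);
    }

    int size() { return modules.size(); }
}
```

With equality defined per kind rather than per instance, loading a second `HiveModule` fails fast instead of silently shadowing the first.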
tion Timo, that sounds good to me. Let's continue to discuss other
things.
Best,
Kurt
On Thu, Oct 10, 2019 at 4:14 PM Timo Walther <twal...@apache.org> wrote:
The purpose of ConnectTableDescriptor was to have a programmatic way of
expressing `CREATE TABLE` with JavaDocs and I
, Kurt Young wrote:
Regarding to ConnectTableDescriptor, if in the end it becomes a builder
of CatalogTable, I would be ok with that. But it doesn't look like a builder
now, it's more like another form of TableSource/TableSink.
Best,
Kurt
On Thu, Oct 10, 2019 at 3:44 PM Timo Walther wrote:
Hi
Hi everyone,
thanks for the great discussion with a good outcome.
I have one last comment about:
void createTemporaryFunction(String path, UserDefinedFunction function);
We should encourage users to register a class instead of an instance. We
should enforce a default constructor in the
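A rough sketch of why registering a class with an enforced default constructor helps: the catalog can persist just the class name and re-instantiate the function later. The registry below is invented for illustration and is not the Flink API:

```java
// Illustrative only: a registry that accepts function *classes*, verifies a
// default constructor exists, and can re-instantiate functions by path.
import java.util.HashMap;
import java.util.Map;

class FunctionRegistry {
    private final Map<String, Class<?>> functions = new HashMap<>();

    void createTemporaryFunction(String path, Class<?> functionClass) {
        try {
            // Enforce a no-arg constructor so the function is restorable
            // from catalog metadata (just the class name).
            functionClass.getDeclaredConstructor();
        } catch (NoSuchMethodException e) {
            throw new IllegalArgumentException(
                "Function classes must have a default constructor: " + functionClass.getName());
        }
        functions.put(path, functionClass);
    }

    Object instantiate(String path) {
        try {
            return functions.get(path).getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}

class MyScalarFunction {
    public MyScalarFunction() {}  // default constructor, restorable from class name

    public int eval(int x) { return x * 2; }
}
```

Registering an instance instead would require serializing arbitrary state; registering a class keeps the catalog entry a plain string.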
connector.type`,
`format.type`, `catalog.type`, etc...
Are we planning to change them all ?
Best,
Jark
On Thu, 10 Oct 2019 at 19:56, Timo Walther wrote:
Hi Jark,
we had a long offline discussion yesterday where we considered all
options again. The reasons why we decided for the updated desig
I also heard from other companies that upgrading to Python 3 is in
progress for data teams.
+1 for simplifying the code base with option 1).
Thanks,
Timo
On 08.10.19 16:34, Dian Fu wrote:
Hi everyone,
I would like to propose to drop Python 2 support(Currently Python 2.7, 3.5,
3.6, 3.7 are
mentations in runtime? Am I missing something?
Besides, I left some minor comments in the doc.
Best,
Jark
On Sat, 5 Oct 2019 at 08:42, Xuefu Z wrote:
I agree with Timo that the new table APIs need to be consistent.
I'd go further that a name (or id) is needed for module definition in
+1
Thanks for driving these efforts,
Timo
On 07.10.19 10:10, Dawid Wysakowicz wrote:
+1 for the FLIP.
Best,
Dawid
On 07/10/2019 08:45, Bowen Li wrote:
Hi all,
I'd like to start a new voting thread for FLIP-57 [1] on its latest status
despite [2], and we've reached consensus in [2] and
Hi everyone,
I would like to propose FLIP-65 that describes how we want to deal with
data types and their inference/extraction in the Table API in the
future. I have collected many comments, shortcomings, issues from users
and trainings in recent years that went into the design. It completes
+1 for the syntax and their semantics
I think the implementation part is still a bit unclear to me because it
only ensures the current status but still does not solve future
requirements such as per-partition watermarks that need to be pushed
into a connector such as Kafka. We can also
whole session
entirely (temporary objects, objects derived from DataStream etc.)
I think it is ok to use instances for objects like Catalogs or Modules
and have an overlay on top of that that can create instances from
properties.
Best,
Dawid
On 01/10/2019 11:28, Timo Walther wrote:
Hi Bowen,
id points, and we can adopt the suggestions.
To elaborate a bit on the new SQL syntax, it would imply that, unlike
"SHOW FUNCTION" which only returns function names, "SHOW ALL [TEMPORARY]
FUNCTIONS" would return functions' fully qualified names with catalog
and db names.
On
Hi Bowen,
thanks for your response.
Re 2) I also don't have a better approach for this issue. It is similar
to changing the general TableConfig between two statements. It would be
good to add your explanation to the design document.
Re 3) It would be interesting to know about which "core"
Hi Bowen,
thanks for this proposal after our discussion around the FunctionCatalog
rework. I like the architecture proposed in the FLIP because it is also
based on existing concepts and just slightly modifies the code base.
However, I would like to discuss some unanswered questions:
1)
Hi Bowen,
thanks for postponing the voting and sorry for the inconvenience. For
the future, we should avoid starting voting threads if there hasn't been
a single response in the [DISCUSS] thread. Instead, the owner of the
FLIP should proactively try to ping people for feedback. Even if the
light that it has a more narrow focus and is really only a POJO for
holding a bunch of config options that have to go together. What do you
think?
Best,
Aljoscha
On 3. Sep 2019, at 14:08, Timo Walther wrote:
Hi Danny,
yes, this FLIP covers all the building blocks we need also for
unifica
Hi all,
I support Fabian's arguments. In my opinion, temporary objects should
just be an additional layer on top of the regular catalog/database
lookup logic. Thus, a temporary table or function has always highest
precedence and should be stable within the local session. Otherwise it
could
these, except that we should somewhat stick to what the industry does.
But I also understand that the industry is already very divided on this.
Best,
Aljoscha
On 18. Sep 2019, at 11:41, Jark Wu wrote:
Hi,
+1 to strive for reaching consensus on the remaining topics. We are
close to
lt-in function with temporary function
The built-in/system namespace would not be writable for permanent
objects.
WDYT?
This way I think we can have benefits of both solutions.
Best,
Dawid
On Tue, 17 Sep 2019, 07:24 Timo Walther, wrote:
Hi Bowen,
I understand the potential benefit of over
Hi Dawid,
thanks for the design document. It fixes big concept gaps due to
historical reasons with proper support for serializability and catalog
support in mind.
I would not mind a registerTemporarySource/Sink, but the problem that I
see is that many people think that this is the
consider is how about moving the
User-Defined-Extensions subpages to corresponding broader topics?
Sources & Sinks >> Connect to external systems
Catalogs >> Connect to external systems
and then have a Functions sections with subsections:
functions
|- built in functions
Hi Bowen,
I understand the potential benefit of overriding certain built-in
functions. I'm open to such a feature if many people agree. However, it
would be great to still support overriding catalog functions with
temporary functions in order to prototype a query even though a
Hi Hanan,
the community is currently reworking parts of the architecture of Flink
SQL first for making it a good foundation for further tools around it
(see also FLIP-32 and following SQL-related FLIPs). In Flink 1.10 the
SQL Client will not receive major updates but it seems likely that
is always easier to expand the APIs later than reducing them.
Cheers,
Rong
On Mon, Sep 2, 2019 at 2:37 AM Timo Walther wrote:
Hi all,
I see a majority votes for `lit(12)` so let's adopt that in the FLIP.
The `$("field")` would consider Fabian's concerns so I would vote for
keeping it
pling flink-table and external systems.
- It always resides in front of catalog functions in ambiguous
function
reference order, just like in its own external system
- It is a special catalog function that doesn’t have a
schema/database
namespace
- It goes thru the same instantiation log
Hi Jingsong,
thanks for your proposal. Could you repost this email with the subject:
"[DISCUSS] FLIP-63: Rework table partition support"
Some people have filters for [DISCUSS] threads and it also makes
important emails more prominent visually.
Thanks,
Timo
On 04.09.19 09:11, JingsongLee
not set this method and for Python functions,
it will be set in the code-generated Java function by the framework. So, I
think we should declare the getLanguage() in FunctionDefinition for now.
(I'm not quite sure what you mean by saying that getKind() is final in
UserDefinedFunction.)
Best,
Jincheng
Timo
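The getLanguage() placement under discussion might be sketched as follows. The names (`FunctionDefinition`, `FunctionLanguage`) follow the thread but are illustrative, not the final API:

```java
// Sketch: declare getLanguage() on the definition interface with a JVM
// default, so only framework-generated Python wrappers need to override it.
enum FunctionLanguage { JVM, PYTHON }

interface FunctionDefinition {
    /** Defaults to JVM; the framework overrides this for Python functions. */
    default FunctionLanguage getLanguage() { return FunctionLanguage.JVM; }
}

class JavaScalarFunction implements FunctionDefinition {
    // Ordinary user functions do not set getLanguage() at all.
}

class PythonWrappedFunction implements FunctionDefinition {
    // The code-generated Java wrapper for a Python function overrides it.
    @Override
    public FunctionLanguage getLanguage() { return FunctionLanguage.PYTHON; }
}
```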
are basically the same as what Calcite does, I think
we are in the same line.
Best,
Danny Chan
On 3 Sep 2019 at 7:57 PM +0800, Timo Walther wrote:
This sounds exactly like the module approach I mentioned, no?
Regards,
Timo
On 03.09.19 13:42, Danny Chan wrote:
Thanks Bowen for bringing up this topic, I think
Hi Danny,
yes, this FLIP covers all the building blocks we need also for
unification of the DDL properties.
Regards,
Timo
On 03.09.19 13:45, Danny Chan wrote:
with the new SQL DDL
based on properties as well as more connectors and formats coming up,
unified configuration becomes more
This sounds exactly like the module approach I mentioned, no?
Regards,
Timo
On 03.09.19 13:42, Danny Chan wrote:
Thanks Bowen for bringing up this topic, I think it’s a useful refactoring
to make our function usage more user-friendly.
For the topic of how to organize the builtin operators and
FLIP-58. Could you extend the example to show how to specify these
attributes in the FLIP?
Regards,
Timo
[1] https://flink.apache.org/contributing/code-style-and-quality-java.html
On 02.09.19 15:35, jincheng sun wrote:
Hi Timo,
Great thanks for your feedback. I would like to share my
Hi Bowen,
thanks for your proposal. Here are some thoughts:
1) We should not have the restriction "hive built-in functions can only
be used when current catalog is hive catalog". Switching a catalog
should only have implications on the cat.db.object resolution but not
functions. It would be
for internal use cases, maybe we can
avoid adding it to the configurable interface. We can add another interface
such as ExtractableConfigurable for internal usage.
What do you think?
Thanks,
Jiangjie (Becket) Qin
On Mon, Sep 2, 2019 at 11:59 PM Timo Walther wrote:
@Becket:
Regarding "grea
equired for shipping the
Configuration to TaskManager, so that we do not have to use java
serializability.
Best,
Dawid
On 02/09/2019 10:05, Timo Walther wrote:
Hi Becket,
Re 1 & 3: "values in configurations should actually be immutable"
I would also prefer immutability but most o
at 12:15, Timo Walther wrote:
I'm fine with `lit()`. Regarding `col()`, I initially suggested `ref()`
but I think Fabian and Dawid liked single char methods for the most
commonly used expressions.
Btw, what is your opinion on the names of commonly used methods such as
`isEqual`, `isGreaterOrEqua
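As a toy illustration of the naming question above (`lit(...)`, `$("field")`, spelled-out comparisons such as `isEqual` / `isGreaterOrEqual`), here is a self-contained mock, not Flink's actual Expressions API:

```java
import java.util.Map;
import java.util.function.Function;

// Toy expression DSL over a row represented as a Map. Shows the discussed
// method names only; the real Table API expressions are far richer.
class Expr {
    final Function<Map<String, Object>, Object> fn;

    Expr(Function<Map<String, Object>, Object> fn) { this.fn = fn; }

    /** Literal value, e.g. lit(12). */
    static Expr lit(Object value) { return new Expr(row -> value); }

    /** Column reference, e.g. $("age"). */
    static Expr $(String column) { return new Expr(row -> row.get(column)); }

    Expr isEqual(Expr other) {
        return new Expr(row -> fn.apply(row).equals(other.fn.apply(row)));
    }

    @SuppressWarnings("unchecked")
    Expr isGreaterOrEqual(Expr other) {
        return new Expr(row ->
            ((Comparable<Object>) fn.apply(row)).compareTo(other.fn.apply(row)) >= 0);
    }

    Object eval(Map<String, Object> row) { return fn.apply(row); }
}
```

Usage would read `$("age").isGreaterOrEqual(lit(18))`, which is the kind of call-site ergonomics the thread is weighing against shorter names.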
Hi all,
the FLIP looks awesome. However, I would like to discuss the changes to
the user-facing parts again. Some feedback:
1. DataViews: With the current non-annotation design for DataViews, we
cannot perform eager state declaration, right? At which point during
execution do we know which
d sth similar to toBytes/fromBytes, but it puts itself to a
Configuration. Also just wanted to make sure we adjusted this part
slightly and now the ConfigOption takes ConfigurableFactory.
Best,
Dawid
On 30/08/2019 09:39, Timo Walther wrote:
Hi Becket,
thanks for the discussion.
1. ConfigOptions
/fromBytes, but it puts itself to a
Configuration. Also just wanted to make sure we adjusted this part
slightly and now the ConfigOption takes ConfigurableFactory.
Best,
Dawid
On 30/08/2019 09:39, Timo Walther wrote:
Hi Becket,
thanks for the discussion.
1. ConfigOptions in their curr
Hi everyone,
the Table API & SQL documentation was already in a very good shape in
Flink 1.8. However, in the past it was mostly presented as an addition
to DataStream API. As the Table and SQL world is growing quickly,
stabilizes in its concepts, and is considered as another top-level API
d the
description should clearly state all the usage of this ConfigOption.
3. I see, in that case, how about we name it something like
extractConfiguration()? I am just trying to see if we can make it clear
this is not something like fromBytes() and toBytes().
Thanks,
Jiangjie (Becket) Qin
On Thu, A
!
Best,
tison.
On Wed, 28 Aug 2019 at 18:08, Jark Wu wrote:
Hi Timo,
The new changes looks good to me.
+1 to the FLIP.
Cheers,
Jark
On Wed, 28 Aug 2019 at 16:02, Timo Walther wrote:
Hi everyone,
after some last minute changes yesterday, I would like to start a new
vote on FLIP-54. The discussion seems
would prefer ‘lit()’ over ‘val()’ since val is a keyword in Scala. Assuming
the intention is to make the dsl ergonomic for Scala developers.
Seth
On Aug 28, 2019, at 7:58 AM, Timo Walther wrote:
Hi David,
thanks for your feedback. I was also skeptical about 1 char method names, I restored
On Tue, Aug 27, 2019 at 11:34 PM Timo Walther wrote:
Hi everyone,
I updated the FLIP proposal one more time as mentioned in the voting
thread. If there are no objections, I will start a new voting thread
tomorrow at 9am Berlin time.
Thanks,
Timo
On 22.08.19 14:19, Timo Walther wrote:
Hi
ed to mean the value of the "foo" column, or a literal string.
David
On Tue, Aug 27, 2019 at 5:45 PM Timo Walther wrote:
Hi David,
thanks for your feedback. With the current design, the DSL would be free
of any ambiguity but it is definitely more verbose esp. around defining
values.
I
Hi everyone,
after some last minute changes yesterday, I would like to start a new
vote on FLIP-54. The discussion seems to have reached an agreement. Of
course this doesn't mean that we can't propose further improvements on
ConfigOption's and Flink configuration in general in the future. It
able than the current
Java DSL. In a training context it will be easy to teach, but I wonder
if we can find a way to make it look less alien at first glance.
David
On Wed, Aug 21, 2019 at 1:33 PM Timo Walther wrote:
Hi everyone,
some of you might remember the discussion I started end of March [1]
ab
Hi everyone,
I updated the FLIP proposal one more time as mentioned in the voting
thread. If there are no objections, I will start a new voting thread
tomorrow at 9am Berlin time.
Thanks,
Timo
On 22.08.19 14:19, Timo Walther wrote:
Hi everyone,
thanks for all the feedback we have
; objects immutable.
Best,
Dawid
On 27/08/2019 13:28, Timo Walther wrote:
Hi everyone,
thanks for the great feedback we have received for the draft of
FLIP-54. The discussion seems to have reached an agreement. Of course
this doesn't mean that we can't propose further improvements on
Confi
Hi everyone,
thanks for the great feedback we have received for the draft of FLIP-54.
The discussion seems to have reached an agreement. Of course this
doesn't mean that we can't propose further improvements on
ConfigOption's and Flink configuration in general in the future. It is
just one
Thanks to everyone who contributed to this release. Great team work!
Regards,
Timo
On 22.08.19 14:16, JingsongLee wrote:
Congratulations~~~ Thanks gordon and everyone~
Best,
Jingsong Lee
--
From:Oytun Tez
Send
for maybe validating the entire global
configuration before submitting a job in the future.
Please take another look if you find time. I hope we can proceed with
the voting process if there are no objections.
Regards,
Timo
On 19.08.19 12:54, Timo Walther wrote:
Hi Stephan,
thanks
Hi everyone,
some of you might remember the discussion I started end of March [1]
about introducing a new Java DSL for Table API that is not embedded in a
string.
In particular, it solves the following issues:
- No possibility of deprecating functions
- Missing documentation for users
-
+1
On 21.08.19 13:21, Stephan Ewen wrote:
+1
On Wed, Aug 21, 2019 at 1:07 PM Kostas Kloudas wrote:
Hi all,
Following the FLIP process, this is a voting thread dedicated to the
FLIP-52.
As shown from the corresponding discussion thread [1], we seem to agree
that
the Program interface
Thanks for summarizing the discussion Andrey, +1 to this style.
Regards,
Timo
On 21.08.19 11:57, Andrey Zagrebin wrote:
Hi All,
It looks like we have reached a consensus regarding the last left question.
I suggest the following final summary:
- Use @Nullable annotation where you do
d in the release announcement that it is a preview feature, I would not
block this release because of it. Nevertheless, it would be important to
mention this explicitly in the release notes [1].
Regards,
Gordon
[1]
https://github.com/apache/flink/pull/9438
On Thu, Aug 15, 2019 at 11:2
aring this FLIP!
Client API enhancement benefit from this evolution which
hopefully provides a better view of configuration of Flink.
In client API enhancement, we likely make the deployment
of cluster and submission of job totally defined by configuration.
Will take a look at the document in days.
Hi everyone,
Dawid and I are working on making parts of ExecutionConfig and
TableConfig configurable via config options. This is necessary to make
all properties also available in SQL. Additionally, with the new SQL DDL
based on properties as well as more connectors and formats coming up,
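The general idea of making such properties configurable via config options can be sketched as a typed key over a string-keyed configuration. These are simplified illustrative classes, not Flink's exact `ConfigOption`/`Configuration`:

```java
import java.util.HashMap;
import java.util.Map;

// A typed option key with a default value...
class ConfigOption<T> {
    final String key;
    final T defaultValue;

    ConfigOption(String key, T defaultValue) {
        this.key = key;
        this.defaultValue = defaultValue;
    }
}

// ...read and written through a shared key/value Configuration, so the same
// setting is reachable from code (ExecutionConfig-style setters) and from
// SQL DDL properties alike.
class Configuration {
    private final Map<String, Object> values = new HashMap<>();

    <T> void set(ConfigOption<T> option, T value) { values.put(option.key, value); }

    @SuppressWarnings("unchecked")
    <T> T get(ConfigOption<T> option) {
        return (T) values.getOrDefault(option.key, option.defaultValue);
    }
}
```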
+1 for this.
Thanks,
Timo
On 15.08.19 15:57, JingsongLee wrote:
Hi Flink devs,
I would like to start the voting for FLIP-51 Rework of the Expression
Design.
FLIP wiki:
https://cwiki.apache.org/confluence/display/FLINK/FLIP-51%3A+Rework+of+the+Expression+Design
Discussion thread:
pass
still.
Regards,
Timo
On 15.08.19 10:43, JingsongLee wrote:
Hi @Timo Walther @Dawid Wysakowicz:
Now, flink-planner have some legacy DataTypes:
like: legacy decimal, legacy basic array type info...
And If the new type inference infer a Decimal/VarChar with precision, there
should
Hi Kurt,
I agree that this is a serious bug. However, I would not block the
release because of this. As you said, there is a workaround and the
`execute()` works in the most common case of a single execution. We can
fix this in a minor release shortly after.
What do others think?
Regards,
+1
Thanks for all the efforts you put into this for documenting how the
project operates.
Regards,
Timo
On 12.08.19 10:44, Aljoscha Krettek wrote:
+1
On 11. Aug 2019, at 10:07, Becket Qin wrote:
Hi all,
I would like to start a voting thread on the project bylaws of Flink. It
aims
Timo Walther created FLINK-13691:
Summary: Remove deprecated query config
Key: FLINK-13691
URL: https://issues.apache.org/jira/browse/FLINK-13691
Project: Flink
Issue Type: Improvement
Hi Kurt,
I posted my opinion around this particular example in FLINK-13225.
Regarding the definition of "feature freeze": I think it is good to
write down more of the implicit processes that we had in the past. The
bylaws, coding guidelines, and a better FLIP process are very good steps
Timo Walther created FLINK-13649:
Summary: Improve error message when job submission was not
successful
Key: FLINK-13649
URL: https://issues.apache.org/jira/browse/FLINK-13649
Project: Flink
Timo Walther created FLINK-13600:
Summary: TableEnvironment.connect() is not usable
Key: FLINK-13600
URL: https://issues.apache.org/jira/browse/FLINK-13600
Project: Flink
Issue Type: Bug
Hi everyone,
I would vote for using Optional only as method return type for
non-performance critical code. Nothing more. No fields, no method
parameters. Method parameters can be overloaded and internally a class
can work with nulls and @Nullable. Optional is meant for API method
return
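The guideline above might be illustrated as follows; the class and field names are invented for the example:

```java
import java.util.Optional;

// Optional only as the return type of a non-performance-critical API method.
// Internally the class works with a plain nullable reference, and the setter
// takes a plain parameter (overloads or @Nullable cover the null case).
class CatalogRegistry {
    private String defaultCatalog;  // field: nullable reference, no Optional

    void setDefaultCatalog(String name) {  // parameter: no Optional
        this.defaultCatalog = name;
    }

    /** Optional only here, at the API boundary. */
    Optional<String> getDefaultCatalog() {
        return Optional.ofNullable(defaultCatalog);
    }
}
```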
Timo Walther created FLINK-13466:
Summary: Rename Batch/StreamTableSourceFactory methods for
avoiding name clashes
Key: FLINK-13466
URL: https://issues.apache.org/jira/browse/FLINK-13466
Project
Timo Walther created FLINK-13463:
Summary: SQL VALUES might fail for Blink planner
Key: FLINK-13463
URL: https://issues.apache.org/jira/browse/FLINK-13463
Project: Flink
Issue Type: Bug
Hi Danny,
thanks for working on this issue and writing down the concept
suggestion. We are currently still in the progress of finalizing the 1.9
release. Having proper streaming DDL support will definitely be part of
Flink 1.10. I will take a look at the whole DDL efforts very soon once
the
Timo Walther created FLINK-13458:
Summary: ThreadLocalCache clashes for Blink planner
Key: FLINK-13458
URL: https://issues.apache.org/jira/browse/FLINK-13458
Project: Flink
Issue Type: Sub
Timo Walther created FLINK-13429:
Summary: SQL Client end-to-end test fails on Travis
Key: FLINK-13429
URL: https://issues.apache.org/jira/browse/FLINK-13429
Project: Flink
Issue Type: Bug
Timo Walther created FLINK-13419:
Summary: TableEnvironment.explain() has side-effects on
ExecutionConfig
Key: FLINK-13419
URL: https://issues.apache.org/jira/browse/FLINK-13419
Project: Flink
Timo Walther created FLINK-13385:
Summary: Align Hive data type mapping with FLIP-37
Key: FLINK-13385
URL: https://issues.apache.org/jira/browse/FLINK-13385
Project: Flink
Issue Type: Bug
Thanks for summarizing our offline discussion Dawid! Even though I would
prefer solution 1 instead of releasing half-baked features, I also
understand that the Table API should not further block the next release.
Therefore, I would be fine with solution 3 but introduce the new
user-facing
+1 sounds good to inform people about instabilities or other issues
Regards,
Timo
On 22.07.19 09:58, Haibo Sun wrote:
+1. Sounds good. Letting the PR creators know the build results of the
master branch can help to determine more quickly whether Travis failures
of their PR are an exact
Timo Walther created FLINK-13350:
Summary: Distinguish between temporary tables and persisted tables
Key: FLINK-13350
URL: https://issues.apache.org/jira/browse/FLINK-13350
Project: Flink
Timo Walther created FLINK-13335:
Summary: Align the SQL DDL with FLIP-37
Key: FLINK-13335
URL: https://issues.apache.org/jira/browse/FLINK-13335
Project: Flink
Issue Type: Sub-task
Timo Walther created FLINK-13273:
Summary: Allow switching planners in SQL Client
Key: FLINK-13273
URL: https://issues.apache.org/jira/browse/FLINK-13273
Project: Flink
Issue Type: New
Timo Walther created FLINK-13264:
Summary: Remove planner class clashes for planner type inference
lookups
Key: FLINK-13264
URL: https://issues.apache.org/jira/browse/FLINK-13264
Project: Flink
Timo Walther created FLINK-13262:
Summary: Add documentation for the new Table & SQL API type system
Key: FLINK-13262
URL: https://issues.apache.org/jira/browse/FLINK-13262
Project: F
Timo Walther created FLINK-13191:
Summary: Add a simplified UDF type extraction
Key: FLINK-13191
URL: https://issues.apache.org/jira/browse/FLINK-13191
Project: Flink
Issue Type: Sub-task
Timo Walther created FLINK-13078:
Summary: Add a type parser utility
Key: FLINK-13078
URL: https://issues.apache.org/jira/browse/FLINK-13078
Project: Flink
Issue Type: Sub-task
Timo Walther created FLINK-13045:
Summary: Move Scala expression DSL to flink-table-api-scala
Key: FLINK-13045
URL: https://issues.apache.org/jira/browse/FLINK-13045
Project: Flink
Issue
Timo Walther created FLINK-13028:
Summary: Move expression resolver to flink-table-api-java
Key: FLINK-13028
URL: https://issues.apache.org/jira/browse/FLINK-13028
Project: Flink
Issue Type
Timo Walther created FLINK-12996:
Summary: Add predefined type validators, strategies, and
transformations
Key: FLINK-12996
URL: https://issues.apache.org/jira/browse/FLINK-12996
Project: Flink
Timo Walther created FLINK-12968:
Summary: Add a casting utility
Key: FLINK-12968
URL: https://issues.apache.org/jira/browse/FLINK-12968
Project: Flink
Issue Type: Sub-task
Thanks for working on this great design document Jark. I think having
well-defined terminology and semantics around tables, changelogs, table
sources/sinks, and DDL should have been done much earlier. I will take a
closer look at the concepts and give feedback soon. I think having those
Timo Walther created FLINK-12924:
Summary: Introduce basic type inference interfaces
Key: FLINK-12924
URL: https://issues.apache.org/jira/browse/FLINK-12924
Project: Flink
Issue Type: Sub
Timo Walther created FLINK-12899:
Summary: Introduce a resolved expression with data type
Key: FLINK-12899
URL: https://issues.apache.org/jira/browse/FLINK-12899
Project: Flink
Issue Type