Thanks for being the release manager and great job! @Jincheng
Best,
Kurt
On Wed, Jul 3, 2019 at 10:19 AM Tzu-Li (Gordon) Tai
wrote:
> Thanks for being the release manager @jincheng sun
> :)
>
> On Wed, Jul 3, 2019 at 10:16 AM Dian Fu wrote:
>
>> Awesome! Thanks a lot for being the release ma
gregate:
> >>>>>>> > input.localKeyBy(..).aggregate(agg1).keyBy(..).aggregate(agg2)
> >>>>>>> **NOT
> >>>>>>> > SUPPORT**
> >>>>>>> > b) For windowed aggregate:
> >>>>>>> >
>
> >>>>>>>>>> harder to
> >>>>>>>>>> accomplish in a short period of time and may deserve
> >>>its own
> >>>>>>> separate
> >>>>>>>>>> discussion". Thus I didn't incl
as.
Let me know if this makes sense to you.
Best,
Kurt
On Thu, Jul 4, 2019 at 4:32 PM jincheng sun
wrote:
> Hi All,
>
> @Kurt Young one user-defined table aggregate function
> can be used in both with(out) keys case, and we do not introduce any other
> aggregations. just li
>> Gordon
>>
>> On Tue, Jun 25, 2019 at 9:05 PM Chesnay Schepler
>> wrote:
>>
>> > On the fine-grained recovery / batch scheduling side we could make good
>> > use of another week.
>> > Currently we are on track to have the _feature_ merged, b
okup function and upsert sink
4. StreamExecutionEnvironment supports executing job with StreamGraph, and
blink planner should set proper properties to StreamGraph
5. Set resource profiles to task and enable managed memory as resource
profile
Best,
Kurt
On Fri, Jul 5, 2019 at 9:37 PM Kurt Young w
>>>> >>>
> >>>> >>>> >>> Besides, I don't think that's the ultimate reason
> >>>> for lack of
> >>>> >>>> build
> >>>> >>>>
ies to StreamGraph
> > 5. Set resource profiles to task and enable managed memory as resource
> > profile
> >
> > Best,
> > Kurt
> >
> >
> > On Fri, Jul 5, 2019 at 9:37 PM Kurt Young wrote:
> >
> >> Hi devs,
> >>
> >> It'
Congratulations Rong!
Best,
Kurt
On Thu, Jul 11, 2019 at 10:53 PM Kostas Kloudas wrote:
> Congratulations Rong!
>
> On Thu, Jul 11, 2019 at 4:40 PM Jark Wu wrote:
>
>> Congratulations Rong Rong!
>> Welcome on board!
>>
>> On Thu, 11 Jul 2019 at 22:25, Fabian Hueske wrote:
>>
>>> Hi everyone,
o wrote:
> >
> > Any news on this?
> >
> > Thanks,
> > Qi
> >
> >> On Jul 11, 2019, at 11:13 PM, Stephan Ewen wrote:
> >>
> >> Number (6) is not a feature but a bug fix, so no need to block on
> that...
> >>
> >> On
Hi devs,
I just created the branch for the Flink 1.9 release [1] and updated the
version on master to 1.10-SNAPSHOT. This unblocks merging new features
into master.
If you are working on a 1.9-relevant bug fix, it is important to merge it
into both the release-1.9 and master branches.
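For anyone who hasn't done the double merge before, the workflow can be sketched in a throwaway repository; the ticket number FLINK-99999, the commit messages, and the repository itself are all made up for illustration, only the branch name mirrors the Flink convention:

```python
# Throwaway-repo demo of landing a fix on master and porting it to the
# release branch (assumes the `git` binary is installed).
import subprocess, tempfile, os

repo = tempfile.mkdtemp()

def git(*args):
    return subprocess.run(("git", "-C", repo) + args, check=True,
                          capture_output=True, text=True).stdout

git("init", "-q")
git("config", "user.email", "dev@example.com")
git("config", "user.name", "Flink Dev")

open(os.path.join(repo, "file.txt"), "w").write("base\n")
git("add", "file.txt")
git("commit", "-qm", "base commit")
git("branch", "release-1.9")          # the release branch is cut here

open(os.path.join(repo, "fix.txt"), "w").write("fix\n")
git("add", "fix.txt")
git("commit", "-qm", "[FLINK-99999][table] Fix a 1.9-relevant bug")
fix_sha = git("rev-parse", "HEAD").strip()   # the fix lands on master first

git("checkout", "-q", "release-1.9")
git("cherry-pick", "-x", fix_sha)     # -x records the original commit id
print(git("log", "--oneline"))
```

With `-x`, the ported commit carries a "(cherry picked from commit ...)" line, which makes it easy to audit later which fixes reached both branches.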
Sorry about that and thanks Gordon for fixing this!
Best,
Kurt
On Mon, Jul 15, 2019 at 5:43 PM Tzu-Li (Gordon) Tai
wrote:
> Done.
>
> Thanks for the reminder and help with the Jenkins deployment setup!
>
> Cheers,
> Gordon
>
> On Mon, Jul 15, 2019 at 3:54 PM Chesnay Schepler
> wrote:
>
>> Ple
Congrats Becket!
Best,
Kurt
On Thu, Jul 18, 2019 at 4:12 PM JingsongLee
wrote:
> Congratulations Becket!
>
> Best, Jingsong Lee
>
>
> --
> From: Congxian Qiu
> Send Time: Thu, Jul 18, 2019 16:09
> To: dev@flink.apache.org
> Subject: R
Congratulations Zhijiang!
Best,
Kurt
On Tue, Jul 23, 2019 at 8:59 AM Biao Liu wrote:
> Congrats Zhijiang. Well deserved!
>
> SHI Xiaogang wrote on Tue, Jul 23, 2019 at 08:35:
>
> > Congratulations Zhijiang!
> >
> > Regards,
> > Xiaogang
> >
> > Guowei Ma wrote on Tue, Jul 23, 2019 at 8:08 AM:
> >
> > > Congratulations Zh
Thanks Dawid for driving this discussion.
Personally, I would +1 for using option #2 for 1.9.0 and go with option #1
in 1.10.0.
Regarding Xuefu's concern about option #1, I think we could also try to
reuse the in-memory catalog
for the builtin temporary table storage.
Regarding option #2 and o
Thanks everyone!
It is really exciting and honored to be part of such a great community.
Looking forward to continuing to push Flink forward!
Best,
Kurt
On Wed, Jul 24, 2019 at 9:07 AM Becket Qin wrote:
> Congrats, Kurt. Well deserved!
>
> Jiangjie (Becket) Qin
>
> On Wed, Jul 24, 2019 at 1:11
Hi Flink devs,
RC1 for Apache Flink 1.9.0 has been created. Just as RC0, this is still
a preview-only RC to drive the current testing efforts. This has all the
artifacts that we would typically have for a release, except for a source
code tag and a PR for the release announcement.
RC1 contains th
Update: RC1 for 1.9.0 has been created. Please see [1] for the preview
source / binary releases and Maven artifacts.
Best,
Kurt
[1]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/PREVIEW-Apache-Flink-1-9-0-release-candidate-1-td31233.html
On Tue, Jul 30, 2019 at 2:36 PM Tzu-Li (
Congrats Hequn!
Best,
Kurt
On Wed, Aug 7, 2019 at 5:06 PM jincheng sun
wrote:
> Hi everyone,
>
> I'm very happy to announce that Hequn accepted the offer of the Flink PMC
> to become a committer of the Flink project.
>
> Hequn has been contributing to Flink for many years, mainly working on
>
+1 to include this in 1.9.0, adding some examples doesn't look like a new
feature to me.
BTW, I am also trying this tutorial based on release-1.9 branch, but
blocked by:
git clone --branch release-1.10-SNAPSHOT
git@github.com:apache/flink-playgrounds.git
Neither 1.10 nor 1.9 exists in flink-playgr
> # gpg --batch --keyserver "$server" --recv-keys "$GPG_KEY" && break
> || : ; \
> # done && \
> # gpg --batch --verify flink.tgz.asc flink.tgz; \
> # gpgconf --kill all; \
> # rm -rf "$GNUPGHOME" flink.tgz.asc; \
> # \
>
Hi Stephan,
Thanks for bringing this up. I think it's important and a good time to
discuss what
*feature freeze* really means. At least to me, it seems I have some
misunderstandings about this compared to other community members. But as
you
pointed out in the jira and also in this mail, I think
Hi Zili,
Thanks for the heads up. The 2 issues you mentioned were opened by me. We
have
found the cause of the second issue and a PR was opened for it. As said in
the jira, the
issue was just a testing problem and should not be a blocker for the 1.9.0
release. However,
we will still merge it into the 1.9 branch.
cc user-zh mailing list, since there are lots of Chinese-speaking people.
Best,
Kurt
On Tue, Aug 13, 2019 at 4:02 PM WangHengwei wrote:
> Hi all,
>
>
> I'm working on [FLINK-13405] Translate "Basic API Concepts" page into
> Chinese. I have a problem.
>
> Usually we translate "Data Sourc
+1 (binding)
Best,
Kurt
On Wed, Aug 14, 2019 at 1:34 AM Yun Tang wrote:
> +1 (non-binding)
>
> But I have a minor question about "code change" action, for those
> "[hotfix]" github pull requests [1], the dev mailing list would not be
> notified currently. I think we should change the descripti
nically
> > > > > > > >>> speaking it is fine to change it later. It is just better
> if
> > we
> > > > > could
> > > > > > > >>> avoid
> > > > > > > >>> doing that.
> > > > > > > >>>
> > > >
> > - in all the testing I had fine-grained recovery (region failover)
> > enabled but I didn’t simulate any failures
> >
> > > On 14. Aug 2019, at 15:20, Kurt Young wrote:
> > >
> > > Hi,
> > >
> > > Thanks for preparing this release cand
Congratulations Andrey!
Best,
Kurt
On Thu, Aug 15, 2019 at 10:09 AM Biao Liu wrote:
> Congrats!
>
> Thanks,
> Biao /'bɪ.aʊ/
>
>
>
> On Thu, 15 Aug 2019 at 10:03, Jark Wu wrote:
>
> > Congratulations Andrey!
> >
> >
> > Cheers,
> > Jark
> >
> > On Thu, 15 Aug 2019 at 00:57, jincheng sun
> > w
> > - I think it’s an OK restriction to have for now
> > - in all the testing I had fine-grained recovery (region failover)
> > enabled but I didn’t simulate any failures
> >
> > > On 14. Aug 2019, at 15:20, Kurt Young wrote:
> > >
> > > Hi,
> > >
>
n. We can
> > fix this in a minor release shortly after.
> >
> > What do others think?
> >
> > Regards,
> > Timo
> >
> >
> > Am 15.08.19 um 11:23 schrieb Kurt Young:
> > > HI,
> > >
> > > We just find a serious bug ar
-dist, but I cannot
find it in
the binary distribution of RC2.
Best,
Kurt
On Thu, Aug 15, 2019 at 6:19 PM Kurt Young wrote:
> Hi Gordon & Timo,
>
> Thanks for the feedback, and I agree with it. I will document this in the
> release notes.
>
> Best,
> Kurt
>
>
> O
ng since we don't have a pre-built version of the WebUI in the
> source.
>
> On 15/08/2019 15:22, Kurt Young wrote:
> > After going through the licenses, I found 2 suspicions but not sure if
> they
> > are
> > valid or not.
> >
> > 1. flink-state-p
From SQL's perspective, distributed cross join is a valid feature but not
very
urgent. Actually this discussion reminds me of another useful feature
(sorry
for the distraction):
when doing broadcast in batch shuffle mode, we can make each producer only
write one copy of the output data, but not f
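A back-of-the-envelope comparison of the two strategies (the sizes and consumer count below are made up; this only illustrates the accounting, not Flink's shuffle implementation):

```python
# One producer broadcasting its output partition to every consumer task.
output_bytes = 64 * 1024 * 1024      # 64 MB produced by one task (hypothetical)
num_consumers = 100

# Writing one copy of the data per consumer:
per_consumer_total = output_bytes * num_consumers

# Writing a single copy that all consumers read:
single_copy_total = output_bytes

print(per_consumer_total // single_copy_total)   # 100 -- the write amplification saved
```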
Thanks for the updates, Jark! I have subscribed to the ML and everything
looks good now.
Best,
Kurt
On Mon, Aug 26, 2019 at 11:17 AM Jark Wu wrote:
> Hi all,
>
> Sorry it take so long to get back. I have some good news.
>
> After some investigation and development and the help from Chesnay, we
>
_state.html
> <
> https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/state/broadcast_state.html
> >
> [2] https://github.com/apache/flink/pull/7713 <
> https://github.com/apache/flink/pull/7713>
>
> > On 26 Aug 2019, at 09:35, Kurt Young wrote:
> >
> &
One suggestion: we could also filter all notifications about *Cancelled*
builds.
Best,
Kurt
On Tue, Aug 27, 2019 at 10:53 AM jincheng sun
wrote:
> Great Job Jark :)
> Best, Jincheng
>
> Kurt Young wrote on Mon, Aug 26, 2019 at 6:38 PM:
>
> > Thanks for the updates, Jark! I have
Hi Zili,
Thanks for the proposal, I had similar confusion in the past with your
point #2.
Force rebasing to master before merging can solve some problems, but it also
introduces new problems. Given that the CI testing time is quite long (a
couple of hours)
now, it's highly possible that before your test whi
+1 to the general idea and thanks for driving this. I think the new
structure is
clearer than the old one, and I have some suggestions:
1. How about adding an "Architecture & Internals" chapter? This can help
developers
or users who want to contribute more to have a better understanding about
Ta
Thanks Xintong for driving this effort, I haven't finished the whole
document yet,
but have a couple of questions:
1. Regarding network memory, the document said it will be derived by the
framework
automatically. I'm wondering whether we should delete this dimension from
the user-facing API?
2. Regard
Thanks Bowen for driving this.
+1 for the general idea. It makes the function resolution behavior
clearer and more deterministic. Besides, users can use all Hive built-in
functions, which is a great feature.
I only have one comment, but it may touch your design, so I think
it would make sense
Does this only affect the functions and operations we currently have in SQL,
and
have no effect on tables? Looks like this is an orthogonal concept
to Catalog?
If the answers are both yes, then wouldn't the catalog function be a weird
concept?
Best,
Kurt
On Tue, Sep 3, 2019 at 8:10 PM Danny C
catalog".
> >>>
> >>> Yes, I've unified #3 and #4 but it seems I didn't update some part of
> >>> the doc. I've modified those sections, and they are up to date now.
> >>>
> >>> In short, now built-in function of e
+1 to add JSON support to Flink. We also see lots of requirements for JSON
related functions in our internal platform. Since these are already part of
the SQL standard, I think it's a good time to add them to Flink.
Best,
Kurt
On Thu, Sep 5, 2019 at 10:37 AM Qi Luo wrote:
> We also see strong demands from
Congratulations Klou!
Best,
Kurt
On Sat, Sep 7, 2019 at 2:37 PM ying wrote:
> Congratulations Kostas!
>
> On Fri, Sep 6, 2019 at 11:21 PM Gary Yao wrote:
>
> > Congratulations Klou!
> >
> > On Sat, Sep 7, 2019 at 6:21 AM Thomas Weise wrote:
> >
> > > Congratulations!
> > >
> > >
> > > On Fri
+1 for FLIP-53.
I would like to raise one minor concern regarding implementing the
"request an absolute amount of memory" case. Currently, it will be
translated to a memory fraction at compile time, and translated back
to an absolute value during execution. There is a risk that the user might
get less than h
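The round-trip described above can be shown with a toy calculation; the slot sizes and the 300 MB request below are made-up numbers, not Flink's actual accounting:

```python
# A user asks for an absolute amount of managed memory, but the plan can
# only carry a fraction, computed against the slot size assumed at compile
# time (all numbers here are hypothetical).
requested_mb = 300
compile_time_slot_mb = 1024
fraction = requested_mb / compile_time_slot_mb   # 0.29296875

# At execution time the fraction is turned back into an absolute amount,
# possibly against a smaller slot than the one assumed at compile time.
runtime_slot_mb = 512
granted_mb = fraction * runtime_slot_mb

print(granted_mb)   # 150.0 -- less than the 300 MB originally requested
```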
+1 (binding)
- built from source and passed all tests locally
- checked the difference between 1.8.1 and 1.8.2, no legal risk found
- went through all commits checked in between 1.8.1 and 1.8.2, making
sure all the issues set the proper "fixVersion" property
Best,
Kurt
On Mon, Sep 9, 2019 at 8:45
+1 to this feature, I left some comments on the google doc.
Another comment: I think we should reorganize some of the content
when you convert this to a cwiki page. I will have some offline
discussion
with you.
Since this feature seems to be a fairly big effort, I suggest we can
settle
Hi Srikanth,
AFAIK, there are quite some companies already using Flink streaming
SQL to back their production systems, like realtime data warehouses. If
you meet some issues when trying streaming SQL, I would suggest you
send the problem to user@ml, where you can get some help.
Best,
Kurt
After some review and discussion in the google document, I think it's time
to
convert this design to a cwiki FLIP page and start the voting process.
Best,
Kurt
On Mon, Sep 9, 2019 at 7:46 PM Jark Wu wrote:
> Hi all,
>
> Thanks all for so much feedbacks received in the doc so far.
> I saw a general
>>
>> Many of them are cases of Flink-SQL.
>>
>>
>> Best,
>>
>> Forward
>>
>> srikanth flink wrote on Mon, Sep 16, 2019 at 9:39 PM:
>>
>> > Hi Kurt,
>> >
>> > thanks for quick response. Is the email user@ml?
>> >
>
ke this?
> >
> > SQL
> > - Overview
> > - Data Manipulation Statements (all operations available in SQL)
> > - Data Definition Statements (DDL syntaxes)
> > - Pattern Matching
> >
> > It renames "Full Reference" to "Data Manipulatio
Hi Jun,
Thanks for bringing this up, in general I'm +1 on this feature. As
you might know, there is another ongoing effort around this kind
of table sink, which is covered in the newly proposed partition support
rework [1]. In this proposal, we also want to introduce a new
file system connector, which
Kurt:
> thank you very much.
> I will take a closer look at the FLIP-63.
>
> I developed this PR; the underlying sink is StreamingFileSink, not
> BucketingSink, but I gave it a name, called Bucket.
>
>
> On 09/17/2019 10:57,Kurt Young
> wrote:
>
> Hi Ju
3 and see if there is a better solution to
> combine these two functions. I am very willing to join this development.
>
>
>
> -- Original Message --
> *From:* "Kurt Young";
> *Sent:* Tue, Sep 17, 2019 11:19 AM
> *To:* "Jun Zhang"<825875..
Hi all,
Sorry to join this party late. Big +1 to this FLIP, especially for
dropping the
"registerTableSink & registerTableSource" part. These are indeed legacy
and we should try to unify them through CatalogTable after we introduce
the concept of Catalog.
From my understanding, what we can regis
-SecocBqzUh7zY6HBYcfMlG_0z-JAcuZkCvsmN3LrOw/edit?ts=5d8258cd
>
> On Mon, 16 Sep 2019 at 16:12, Kurt Young wrote:
>
> > After some review and discussion in the google document, I think it's
> time
> > to
> > convert this design to a cwiki flip page and start voting pro
IIUC it's good to see that both serializable (tables description from DDL)
and unserializable (tables with DataStream underneath) tables are treated
unify with CatalogTable.
Can I also assume functions that either come from a function class (from
DDL)
or function objects (newed by user) will also
Looks like I'm the only person who is willing to +1 to #2 for now :-)
But I would suggest changing the keyword from GLOBAL to
something like BUILTIN.
I think #2 and #3 are almost the same proposal, just with a different
format to indicate whether it wants to override built-in functions.
My biggest
And let me make my vote complete:
-1 for #1
+1 for #2 with different keyword
-0 for #3
Best,
Kurt
On Thu, Sep 19, 2019 at 4:40 PM Kurt Young wrote:
> Looks like I'm the only person who is willing to +1 to #2 for now :-)
> But I would suggest to change the keyword from GLOBAL to
> as
> BatchExecLookupJoinRule/StreamExecLookupJoinRule,
> PushProjectIntoTableSourceScanRule,
> CommonLookupJoin). So how should we deal with this?
>
> *Best Regards,*
> *Zhenghua Gao*
>
>
> On Wed, Feb 5, 2020 at 2:36 PM Kurt Young wrote:
>
> > Hi all,
> >
> > I&
Hi Zhenghua,
After removing TableSource::getTableSchema, during optimization, I could
imagine
the schema information might come from relational nodes such as TableScan.
Best,
Kurt
On Wed, Feb 5, 2020 at 8:24 PM Kurt Young wrote:
> Hi Jingsong,
>
> Yes current TableFactory is not
Hi dev,
Currently I want to remove some already deprecated methods from
TableEnvironment which are annotated with @PublicEvolving. And I also created
a discussion thread [1] on both the dev and user mailing lists to gather
feedback on that. But I didn't find any matching rule in the Flink bylaws
[2] to follow
ich I don't consider a super
> > strong guarantee to users (we still shouldn't remove them lightly, but
> > we can if we have to...)
> >
> > Best,
> > Aljoscha
> >
> > On 07.02.20 04:40, Kurt Young wrote:
> >> Hi dev,
> >>
> >
e feedback anytime here
or in jira
if you have other opinions.
Best,
Kurt
On Wed, Feb 5, 2020 at 8:26 PM Kurt Young wrote:
> Hi Zhenghua,
>
> After removing TableSource::getTableSchema, during optimization, I could
> imagine
> the schema information might come from relational nodes su
e future. We can use this to replace some
> of the tests. Otherwise I guess we should come up with a better test
> infrastructure to make defining source not necessary anymore.
>
> Regards,
> Timo
>
>
> On 07.02.20 11:24, Kurt Young wrote:
> > Thanks all for your feedback
+1 (binding)
- verified signatures and checksums
- started a local cluster, ran some examples, randomly played some SQL with
the sql client, no suspicious error/warn logs found in the log files
- repeated the above with both Scala 2.11 and 2.12 binaries
Best,
Kurt
On Mon, Feb 10, 2020 at 6:38 PM Yang Wang
Congratulations to everyone involved!
Great thanks to Yu & Gary for being the release managers!
Best,
Kurt
On Thu, Feb 13, 2020 at 10:06 AM Hequn Cheng wrote:
> Great thanks to Yu & Gary for being the release manager!
> Also thanks to everyone who made this release possible!
>
> Best, Hequn
>
>
Regarding "fromQuery" confusing users with "Table from(String
tableName)", I have
just the opposite opinion. I think this "fromXXX" pattern can make it
quite clear to users when they
want to get a Table from TableEnvironment. Similar interfaces also
include "fromElements".
Regarding the
> > "addUpdate" method and "addDelete" method to support them.
> >
> > Regarding to the "Inserts addInsert", maybe we can add a
> "DmlBatchBuilder".
> >
> > open to more discussion
> >
> > Best,
> > godfrey
> >
Hi everyone,
I'm very happy to announce that Jingsong Lee accepted the offer of the
Flink PMC to
become a committer of the Flink project.
Jingsong Lee has been an active community member for more than a year now.
He is
mainly focused on Flink SQL, and played an essential role during the blink
planner mergi
+1 (binding)
On Fri, Feb 21, 2020 at 11:25 AM lining jing wrote:
> +1 (non-binding)
> It lists all log files, the user could see the GC log.
>
> Xintong Song wrote on Fri, Feb 21, 2020 at 10:44 AM:
>
> > +1 (non-binding)
> >
> > I like the ideas of having a list of all log files, and make them
> > downloadab
Some questions related to "managed memory":
1. Should the managed memory be part of direct memory?
2. Should the shuffle memory also be part of the managed memory?
Best,
Kurt
On Fri, Feb 21, 2020 at 10:41 AM Xintong Song wrote:
> Thanks for driving this FLIP, Yadong.
>
> +1 (non-binding) for
Hi Yadong,
Thanks for the proposal, it's a useful feature, especially for batch jobs.
But according
to the examples you gave, I can't tell whether I got the required
information from them.
Can you replace the demo job with a more complex batch job so that we can
see some
differences in start/stop time
+1 (binding)
On Fri, Feb 21, 2020 at 1:09 AM Zhijiang
wrote:
> +1 (binding).
> It seems clearer and more direct to highlight the back-pressured vertex
> in the topology, which can raise the attention of users.
>
> Best,
> Zhijiang
>
>
> ---
I agree with Jark, even if we have pending slots now, a dedicated tab seems
to be too much.
Best,
Kurt
On Fri, Feb 21, 2020 at 2:12 PM Jark Wu wrote:
> Thanks Yadong,
>
> I think a pending slot view will be helpful. But will it be verbose when
> there is no pending slot, but a "pending slot" i
+1 (binding)
Best,
Kurt
On Fri, Feb 28, 2020 at 9:15 AM Terry Wang wrote:
> I look through the whole design and it’s a big improvement of usability on
> TableEnvironment’s api.
>
> +1 (non-binding)
>
> Best,
> Terry Wang
>
>
>
> > On Feb 27, 2020 at 14:59, godfrey he wrote:
> >
> > Hi everyone,
> >
> >
+1
Best,
Kurt
On Mon, Mar 2, 2020 at 5:32 PM Jingsong Lee wrote:
> +1 from my side.
>
> Best,
> Jingsong Lee
>
> On Mon, Mar 2, 2020 at 11:06 AM Terry Wang wrote:
>
> > +1 (non-binding).
> > With this feature, we can more easily interact traditional database in
> > flink.
> >
> > Best,
> > Te
LGTM now, +1 from my side.
Best,
Kurt
On Wed, Mar 4, 2020 at 12:27 AM Gary Yao wrote:
> Hi Yadong,
>
> Thank you for updating the wiki page.
>
> Only one minor suggestion – I would change:
>
> > If show-history is true return the information of attempt.
>
> to
>
> > If show-history is
Hi Dawid,
I have a couple of questions around key fields; actually I also have some
other questions but want to focus on key fields first.
1. I don't fully understand the usage of "key.fields". Is this option only
valid for write operations? Because for
reading, I can't imagine how such op
20 at 4:42 PM Kurt Young wrote:
> Hi Dawid,
>
> I have a couple of questions around key fields, actually I also have some
> other questions but want to be focused on key fields first.
>
> 1. I don't fully understand the usage of "key.fields". Is this option only
+1
Best,
Kurt
On Wed, Mar 4, 2020 at 8:19 PM Gary Yao wrote:
> +1 (binding)
>
> Best,
> Gary
>
> On Wed, Mar 4, 2020 at 1:18 PM Yadong Xie wrote:
>
> > Hi all
> >
> > I want to start the vote for FLIP-100, which proposes to add attempt
> > information inside subtask and timeline in web UI.
>
These GitHub buttons can sometimes help me merge commits when the network
from China to GitHub is unstable. It would take me very long to fetch and
reorganize
commits locally, fetch master, do some rebasing and then push. Each
step
is time consuming when the network situation is bad.
So I would lik
Good to have such lovely discussions. I also want to share some of my
opinions.
#1 Regarding error handling: I also think ignoring invalid hints would be
dangerous; maybe
the simplest solution is to just throw an exception.
#2 Regarding property replacement: I don't think we should constrain
ou
, this can't solve 100% cases, but I guess can sovle 80% or 90%
> cases.
> And the remaining cases can be resolved by LIKE syntax which I guess is not
> very common cases.
>
> Best,
> Jark
>
>
> On Wed, 11 Mar 2020 at 10:33, Kurt Young wrote:
>
> > Good to have
as
> "SELECT
> >> * FROM mykafka /* faster_read_key=value*/ WHERE offset > 12pm yesterday"
> >> - done and satisfied, users submit it to production
> >>
> >>
> >> Regarding "CREATE TABLE t LIKE with (k1=v1, k2=v2), I
n well-defined long-running
> pipelines.
> > > > > > They should always have default values and can be missing in
> query. They
> > > > > > can be part of a table DDL/definition, but should also be
> replaceable in a
> > > > > > query
Hi, please use a dedicated vote thread.
Best,
Kurt
On Tue, Mar 17, 2020 at 10:36 AM jincheng sun
wrote:
> +1
>
> Best,
> Jincheng
>
>
>
> Wei Zhong wrote on Fri, Mar 13, 2020 at 9:04 PM:
>
> > Hi all,
> >
> > I would like to start the vote for FLIP-106[1] which is discussed and
> > reached consensus in the
n-DDL-td38895.html
> >
>
> > On Mar 17, 2020 at 10:57, Kurt Young wrote:
> >
> > Hi, please use a dedicated vote thread.
> >
> > Best,
> > Kurt
> >
> >
> > On Tue, Mar 17, 2020 at 10:36 AM jincheng sun
> > wrote:
> >
> >> +1
> > > should
> > > > > support predicate pushdown.
> > > > >
> > > > > (2) Hints should not be a workaround for current shortcomings. A
> lot of
> > > > the
> > > > > suggested above sounds exactly like that.
Hi,
AFAIK there is no special watermark generation logic for the temporal table
join operator. Could you share your example's code so I can help
analyze and debug?
Best,
Kurt
On Tue, Mar 17, 2020 at 9:53 PM Dominik Wosiński wrote:
> Hey Guys,
> I have observed a weird behavior on using the
Hi all,
Thanks for the discussion and feedback. I think this FLIP doesn't imply the
implementation
of such a connector yet, it only describes the functionality and expected
behaviors from the user's
perspective. Reusing the current StreamingFileSink is definitely one of the
possible ways to
implement it. Sinc
Yes, I think we should move the `supportedHintOptions` from TableFactory
> >> to TableSourceFactory, and we also need to add the interface to
> >> TableSinkFactory though because sink target table may also have hints
> >> attached.
> >>
> >> Best,
>
Sinks since I think printing shows the issue better.
>
>
> Best Regards,
> Dom.
>
>
> On Wed, Mar 18, 2020 at 08:13, Kurt Young wrote:
>
>> Hi,
>>
>> AFAIK there is no special watermark generation logic for temporal table
>> join operator. Could you
Thanks Timo for the design doc.
In general I'm +1 to this, with a minor comment. Since we introduced dozens
of interfaces all at once,
I'm not sure if it's good to annotate them with @PublicEvolving already. I
can imagine these interfaces
would only be stable after 1 or 2 major releases. Given the f
Hi Becket,
I don't think DataStream should see some SQL specific concepts such as
Filtering or ComputedColumn. It's
better to stay within the SQL area and translate to more generic concepts
when translating to the DataStream/runtime
layer, such as using a MapFunction to represent computed column logic.
Best,
> > > > I don't think DataStream should see some SQL specific concepts such as
> > > >
> > > > Filtering or ComputedColumn.
> > > >
> > > > Projectable and Filterable seems not necessarily SQL concepts, but
> > could
> > > be
> > >
e this method up in the future. Currently, I
> >>>> don't see a need for catalogs or formats. Because how would you target
> >> a
> >>>> format in the query?
> >>>>
> >>>> @Danny: Can you send a link to your PoC? I'm
+1
Best,
Kurt
On Fri, Mar 27, 2020 at 10:51 AM Jingsong Li wrote:
> Hi everyone,
>
> I'd like to start the vote of FLIP-115 [1], which introduce Filesystem
> table factory in table. This FLIP is discussed in the thread[2].
>
> The vote will be open for at least 72 hours. Unless there is an obj
Hi Jark,
Thanks for the proposal, I'm +1 to the general idea. However, I have a
question about "version":
in the old design, the version seemed to be aimed at tracking the property
version; with a different
version, we could evolve these step by step without breaking backward
compatibility. But in this
d
3) If we use "kafka-0.11" as connector identifier, we may have to write a
> full documentation for each version, because they are different
> "connector"?
> IMO, for 0.11, 0.11, etc... kafka, they are actually the same connector
> but with different "client jar"