-dist, but I cannot
find it in
the binary distribution of RC2.
Best,
Kurt
On Thu, Aug 15, 2019 at 6:19 PM Kurt Young wrote:
> Hi Gordon & Timo,
>
> Thanks for the feedback, and I agree with it. I will document this in the
> release notes.
>
> Best,
> Kurt
>
>
> O
Kurt Young created FLINK-13736:
--
Summary: Support count window with blink planner in batch mode
Key: FLINK-13736
URL: https://issues.apache.org/jira/browse/FLINK-13736
Project: Flink
Issue Type
Kurt Young created FLINK-13735:
--
Summary: Support session window with blink planner in batch mode
Key: FLINK-13735
URL: https://issues.apache.org/jira/browse/FLINK-13735
Project: Flink
Issue
n. We can
> > fix this in a minor release shortly after.
> >
> > What do others think?
> >
> > Regards,
> > Timo
> >
> >
> > Am 15.08.19 um 11:23 schrieb Kurt Young:
> > > HI,
> > >
> > > We just found a serious bug ar
; I think it’s an OK restriction to have for now
> > - in all the testing I had fine-grained recovery (region failover)
> > enabled but I didn’t simulate any failures
> >
> > > On 14. Aug 2019, at 15:20, Kurt Young wrote:
> > >
> > > Hi,
> > >
>
Congratulations Andrey!
Best,
Kurt
On Thu, Aug 15, 2019 at 10:09 AM Biao Liu wrote:
> Congrats!
>
> Thanks,
> Biao /'bɪ.aʊ/
>
>
>
> On Thu, 15 Aug 2019 at 10:03, Jark Wu wrote:
>
> > Congratulations Andrey!
> >
> >
> > Cheers,
> > Jark
> >
> > On Thu, 15 Aug 2019 at 00:57, jincheng sun
> > w
> - in all the testing I had fine-grained recovery (region failover)
> > enabled but I didn’t simulate any failures
> >
> > > On 14. Aug 2019, at 15:20, Kurt Young wrote:
> > >
> > > Hi,
> > >
> > > Thanks for preparing this release cand
nically
> > > > > > > >>> speaking it is fine to change it later. It is just better
> if
> > we
> > > > > could
> > > > > > > >>> avoid
> > > > > > > >>> doing that.
> > > > > > > >>>
> > > >
+1 (binding)
Best,
Kurt
On Wed, Aug 14, 2019 at 1:34 AM Yun Tang wrote:
> +1 (non-binding)
>
> But I have a minor question about the "code change" action: for those
> "[hotfix]" GitHub pull requests [1], the dev mailing list is currently not
> notified. I think we should change the descripti
cc user-zh mailing list, since there are lots of Chinese-speaking people.
Best,
Kurt
On Tue, Aug 13, 2019 at 4:02 PM WangHengwei wrote:
> Hi all,
>
>
> I'm working on [FLINK-13405] Translate "Basic API Concepts" page into
> Chinese. I have a problem.
>
> Usually we translate "Data Sourc
Hi Zili,
Thanks for the heads up. The 2 issues you mentioned were opened by me. We
have found the cause of the second issue and a PR was opened for it. As said
in the jira, the issue was just a testing problem and should not be a blocker
for the 1.9.0 release. However, we will still merge it into the 1.9 branch.
Kurt Young created FLINK-13688:
--
Summary: HiveCatalogUseBlinkITCase.testBlinkUdf constantly failed
with 1.9.0-rc2
Key: FLINK-13688
URL: https://issues.apache.org/jira/browse/FLINK-13688
Project: Flink
Kurt Young created FLINK-13687:
--
Summary: elasticsearch5.ElasticsearchSinkITCase constantly fail
with 1.9.0-rc2
Key: FLINK-13687
URL: https://issues.apache.org/jira/browse/FLINK-13687
Project: Flink
Hi Stephan,
Thanks for bringing this up. I think it's important and a good time to
discuss what *feature freeze* really means. At least to me, it seems my
understanding of it differs from that of other community members. But as you
pointed out in the jira and also in this mail, I think
> # gpg --batch --keyserver "$server" --recv-keys "$GPG_KEY" && break
> || : ; \
> # done && \
> # gpg --batch --verify flink.tgz.asc flink.tgz; \
> # gpgconf --kill all; \
> # rm -rf "$GNUPGHOME" flink.tgz.asc; \
> # \
>
+1 to include this in 1.9.0; adding some examples doesn't look like a new
feature to me.
BTW, I am also trying this tutorial based on the release-1.9 branch, but I'm
blocked by:
git clone --branch release-1.10-SNAPSHOT
g...@github.com:apache/flink-playgrounds.git
Neither 1.10 nor 1.9 exists in flink-playgr
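One way to see in advance which `--branch` values a remote will accept is `git ls-remote --heads`. A minimal sketch below, run against a throwaway local repository so it works offline; for the real case the argument would be the flink-playgrounds URL from the message above.

```shell
# Sketch: list the branches a remote actually has before cloning with
# --branch. Demonstrated against a throwaway local repo so it runs
# offline; in the real case the argument would be the repository URL.
repo="$(mktemp -d)"
git -C "$repo" init -q
git -C "$repo" -c user.email=dev@example.invalid -c user.name=dev \
  commit -q --allow-empty -m "initial commit"
git -C "$repo" branch release-1.9

# Each output line is "<sha>\trefs/heads/<branch>"; only listed branches
# can be used with: git clone --branch <branch> <url>
git ls-remote --heads "$repo"
```

Only branch names that appear under `refs/heads/` can be passed to `git clone --branch`.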
Congrats Hequn!
Best,
Kurt
On Wed, Aug 7, 2019 at 5:06 PM jincheng sun
wrote:
> Hi everyone,
>
> I'm very happy to announce that Hequn accepted the offer of the Flink PMC
> to become a committer of the Flink project.
>
> Hequn has been contributing to Flink for many years, mainly working on
>
Kurt Young created FLINK-13592:
--
Summary: test_tpch.sh should not hardcode flink version
Key: FLINK-13592
URL: https://issues.apache.org/jira/browse/FLINK-13592
Project: Flink
Issue Type: Bug
Kurt Young created FLINK-13591:
--
Summary: 'Completed Job List' in Flink web doesn't display right
when job name is very long
Key: FLINK-13591
URL: https://issues.apache.org/jira/bro
Update: RC1 for 1.9.0 has been created. Please see [1] for the preview
source / binary releases and Maven artifacts.
Best,
Kurt
[1]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/PREVIEW-Apache-Flink-1-9-0-release-candidate-1-td31233.html
On Tue, Jul 30, 2019 at 2:36 PM Tzu-Li (
Hi Flink devs,
RC1 for Apache Flink 1.9.0 has been created. Just as RC0, this is still
a preview-only RC to drive the current testing efforts. This has all the
artifacts that we would typically have for a release, except for a source
code tag and a PR for the release announcement.
RC1 contains th
Thanks everyone!
I am really excited and honored to be part of such a great community.
Looking forward to continuing to push Flink forward!
Best,
Kurt
On Wed, Jul 24, 2019 at 9:07 AM Becket Qin wrote:
> Congrats, Kurt. Well deserved!
>
> Jiangjie (Becket) Qin
>
> On Wed, Jul 24, 2019 at 1:11
Thanks Dawid for driving this discussion.
Personally, I would +1 for using option #2 for 1.9.0 and go with option #1
in 1.10.0.
Regarding Xuefu's concern about option #1, I think we could also try to
reuse the in-memory catalog
for the builtin temporary table storage.
Regarding option #2 and o
Congratulations Zhijiang!
Best,
Kurt
On Tue, Jul 23, 2019 at 8:59 AM Biao Liu wrote:
> Congrats Zhijiang. Well deserved!
>
> SHI Xiaogang 于2019年7月23日 周二08:35写道:
>
> > Congratulations Zhijiang!
> >
> > Regards,
> > Xiaogang
> >
> > Guowei Ma 于2019年7月23日周二 上午8:08写道:
> >
> > > Congratulations Zh
Congrats Becket!
Best,
Kurt
On Thu, Jul 18, 2019 at 4:12 PM JingsongLee
wrote:
> Congratulations Becket!
>
> Best, Jingsong Lee
>
>
> --
> From:Congxian Qiu
> Send Time:2019年7月18日(星期四) 16:09
> To:dev@flink.apache.org
> Subject:R
Sorry about that and thanks Gordon for fixing this!
Best,
Kurt
On Mon, Jul 15, 2019 at 5:43 PM Tzu-Li (Gordon) Tai
wrote:
> Done.
>
> Thanks for the reminder and help with the Jenkins deployment setup!
>
> Cheers,
> Gordon
>
> On Mon, Jul 15, 2019 at 3:54 PM Chesnay Schepler
> wrote:
>
>> Ple
Hi devs,
I just created the branch for the Flink 1.9 release [1] and updated the
version on master to 1.10-SNAPSHOT. This unblocks merging new features
into master.
If you are working on a 1.9-relevant bug fix, then it is important to merge
it into both the release-1.9 and master branches.
Kurt Young created FLINK-13238:
--
Summary: Reduce blink planner's testing time
Key: FLINK-13238
URL: https://issues.apache.org/jira/browse/FLINK-13238
Project: Flink
Issue Type: Improv
o wrote:
> >
> > Any news on this?
> >
> > Thanks,
> > Qi
> >
> >> On Jul 11, 2019, at 11:13 PM, Stephan Ewen wrote:
> >>
> >> Number (6) is not a feature but a bug fix, so no need to block on
> that...
> >>
> >> On
Kurt Young created FLINK-13234:
--
Summary: TemporalTypesTest randomly failed on travis
Key: FLINK-13234
URL: https://issues.apache.org/jira/browse/FLINK-13234
Project: Flink
Issue Type: Bug
Congratulations Rong!
Best,
Kurt
On Thu, Jul 11, 2019 at 10:53 PM Kostas Kloudas wrote:
> Congratulations Rong!
>
> On Thu, Jul 11, 2019 at 4:40 PM Jark Wu wrote:
>
>> Congratulations Rong Rong!
>> Welcome on board!
>>
>> On Thu, 11 Jul 2019 at 22:25, Fabian Hueske wrote:
>>
>>> Hi everyone,
ies to StreamGraph
> > 5. Set resource profiles to task and enable managed memory as resource
> > profile
> >
> > Best,
> > Kurt
> >
> >
> > On Fri, Jul 5, 2019 at 9:37 PM Kurt Young wrote:
> >
> >> Hi devs,
> >>
> >> It'
Kurt Young created FLINK-13221:
--
Summary: Blink planner should set ScheduleMode to
LAZY_FROM_SOURCES_WITH_BATCH_SLOT_REQUEST for batch jobs
Key: FLINK-13221
URL: https://issues.apache.org/jira/browse/FLINK-13221
Kurt Young created FLINK-13212:
--
Summary: Unstable ChainLengthIncreaseTest
Key: FLINK-13212
URL: https://issues.apache.org/jira/browse/FLINK-13212
Project: Flink
Issue Type: Test
Kurt Young created FLINK-13208:
--
Summary: Add Notice file for upgrading calcite to 1.20
Key: FLINK-13208
URL: https://issues.apache.org/jira/browse/FLINK-13208
Project: Flink
Issue Type: Task
Kurt Young created FLINK-13202:
--
Summary: Unstable StandaloneResourceManagerTest
Key: FLINK-13202
URL: https://issues.apache.org/jira/browse/FLINK-13202
Project: Flink
Issue Type: Test
Kurt Young created FLINK-13201:
--
Summary: Unstable sql time udf test
Key: FLINK-13201
URL: https://issues.apache.org/jira/browse/FLINK-13201
Project: Flink
Issue Type: Test
Components
>>>> >>>
> >>>> >>>> >>> Besides, I don't think that's the ultimate reason
> >>>> for lack of
> >>>> >>>> build
> >>>> >>>>
okup function and upsert sink
4. StreamExecutionEnvironment supports executing job with StreamGraph, and
blink planner should set proper properties to StreamGraph
5. Set resource profiles to task and enable managed memory as resource
profile
Best,
Kurt
On Fri, Jul 5, 2019 at 9:37 PM Kurt Young w
>> Gordon
>>
>> On Tue, Jun 25, 2019 at 9:05 PM Chesnay Schepler
>> wrote:
>>
>> > On the fine-grained recovery / batch scheduling side we could make good
>> > use of another week.
>> > Currently we are on track to have the _feature_ merged, b
as.
Let me know if this makes sense to you.
Best,
Kurt
On Thu, Jul 4, 2019 at 4:32 PM jincheng sun
wrote:
> Hi All,
>
> @Kurt Young one user-defined table aggregate function
> can be used in both with(out) keys case, and we do not introduce any other
> aggregations. just li
>>>>>>>> harder to
> >>>>>>>>>> accomplish in a short period of time and may deserve
> >>>its own
> >>>>>>> separate
> >>>>>>>>>> discussion". Thus I didn't incl
gregate:
> >>>>>>> > input.localKeyBy(..).aggregate(agg1).keyBy(..).aggregate(agg2)
> >>>>>>> **NOT
> >>>>>>> > SUPPORT**
> >>>>>>> > b) For windowed aggregate:
> >>>>>>> >
>
Thanks for being the release manager and great job! @Jincheng
Best,
Kurt
On Wed, Jul 3, 2019 at 10:19 AM Tzu-Li (Gordon) Tai
wrote:
> Thanks for being the release manager @jincheng sun
> :)
>
> On Wed, Jul 3, 2019 at 10:16 AM Dian Fu wrote:
>
>> Awesome! Thanks a lot for being the release ma
ssing does not involve semantic
> changes. The definition of keys is to support non-window flatAggregate on
> upsert mode. (The upsert mode is already supported in the flink framework.
> The current discussion only needs to inform the framework that the keys
> information, which is the `
Hi,
I have a question about the key information of TableAggregateFunction.
IIUC, you need to define
something like primary key or unique key in the result table of
TableAggregateFunction, and also
need a way to let user configure this through the API. My question is, will
that affect the logic of
+1 for sticking to the lazy majority voting, especially because if no
committer has the time capacity to help discuss and review the changes
brought up by the FLIP, it will be meaningless for this FLIP to be
considered accepted.
I don't have much suggestions about the scope
Hi Aljoscha,
I also feel an additional week can make the remaining work easier. At least
we don't have to check in lots of commits in both branches (master &
release-1.9).
Best,
Kurt
On Tue, Jun 25, 2019 at 8:27 PM Aljoscha Krettek
wrote:
> A few threads are converging around supporting th
(Forgot to cc George)
Best,
Kurt
On Tue, Jun 25, 2019 at 10:16 AM Kurt Young wrote:
> Hi Bowen,
>
> Thanks for bringing this up. We have actually discussed this, and I
> think Till and George have
> already spent some time investigating it. I have cced both of them, and
&
Hi Bowen,
Thanks for bringing this up. We have actually discussed this, and I
think Till and George have
already spent some time investigating it. I have cced both of them, and
maybe they can share
their findings.
Best,
Kurt
On Tue, Jun 25, 2019 at 10:08 AM Jark Wu wrote:
> Hi Bowen,
>
>
Congratulations Jincheng!
Best,
Kurt
On Tue, Jun 25, 2019 at 9:56 AM LakeShen wrote:
> Congratulations! Jincheng Sun
>
> Best,
> LakeShen
>
> Robert Metzger 于2019年6月24日周一 下午11:09写道:
>
> > Hi all,
> >
> > On behalf of the Flink PMC, I'm happy to announce that Jincheng Sun is
> now
> > part of
Hi vino,
One thing to add: for a), I think using one or two examples, such as how to
do local aggregation on a sliding window and how to do local aggregation on
an unbounded aggregate, would help a lot.
Best,
Kurt
On Mon, Jun 24, 2019 at 6:06 PM Kurt Young wrote:
> Hi vino,
>
>
At the API level, we have answered your question about passing an
> AggregateFunction to do the aggregation. Whether we introduce the localKeyBy
> API or not, we can support AggregateFunction.
>
> So what's your different opinion now? Can you share it with us?
>
> Best,
> Vino
>
>
ic is very complicated and optimization does not matter, I
> > think it's a better choice to provide a relatively low-level and
> canonical
> > interface.
> >
> > The composite interface, on the other hand, may be a good choice for
> > declarative interfaces, including
e.
>>
>> Best,
>> Vino.
>>
>> Robert Metzger 于2019年6月20日周四 下午10:59写道:
>>
>>> Thanks a lot!
>>>
>>> qq.com belongs to Tencent, right?
>>> As far as I know, we have some active contributors working at Tencent
>>> (Vino
20, 2019 at 10:23 PM Kurt Young wrote:
> Thanks Robert, I left a comment in the JIRA you gave and see what will
> happen.
>
> Best,
> Kurt
>
>
> On Thu, Jun 20, 2019 at 9:04 PM Robert Metzger
> wrote:
>
>> Thank you all for working on this!
>>
>>
(see a similar
> example, also mentioning qq.com:
> https://issues.apache.org/jira/browse/INFRA-18249)
>
>
>
> On Thu, Jun 20, 2019 at 6:23 AM Kurt Young wrote:
>
> > Is there any chance that we can contact Apache infra team to find out why
> > apache mails are blocked by
Is there any chance that we can contact the Apache infra team to find out why
Apache mails are blocked by qq.com?
QQ mail is very popular among Chinese users.
Best,
Kurt
On Thu, Jun 20, 2019 at 12:01 PM Hequn Cheng wrote:
> Hi Gordon,
>
> Thanks a lot for providing the valuable information!
> As I carry
; > > Thanks for your comments.
> > >
> > > I agree that we can provide a better abstraction to be compatible with
> > two
> > > different implementations.
> > >
> > > First of all, I think we should consider what kind of scenarios we need
> > to
>
would probably be a better choice.
>
> Because of that, I think we should eventually provide both versions and in
> the initial version we should at least make the “local aggregation engine”
> abstract enough, that one could easily provide different implementation
> strategy.
>
>
, which can
>benefit from the async snapshot and incremental checkpoint. IMO, the
>performance is not a problem, and we also do not find the performance
> issue
>in our production.
>
> [1]:
>
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCU
Hi dev,
I noticed that all the Travis tests triggered by pull requests fail
with the same error:
"Cached flink dir /home/travis/flink_cache/x/flink does not exist.
Exiting build."
Does anyone have a clue about what happened and how to fix this?
Best,
Kurt
exact API in DataStream named localKeyBy. For the pre-aggregation we need
> to define the trigger mechanism of local aggregation, so we find reusing the
> window API and operator is a good choice. This is a line of reasoning from
> design to implementation.
>
> What do you think?
>
nfigure
>the trigger threshold (maybe memory availability?), this design cannot
>guarantee deterministic semantics (it will cause trouble for testing
> and
>debugging).
> - if the implementation depends on the timing of checkpoint, it would
>affect the che
Hi Vino,
Thanks for the proposal. I like the general idea, and IMO it's a very useful
feature.
But after reading through the document, I feel that we may over-design the
required operator for proper local aggregation. The main reason is we want
to have a clear definition and behavior for the "local
Big +1 and thanks for preparing this.
I think what's more important is making sure all the contributors can follow
the same guide, and a clear document is definitely a great start. Committers
can first try to follow the guide themselves, and spread the standard during
code reviews.
Best,
Kurt
On
Thanks Gordon for bringing this up.
I'm glad to say that the blink planner merge work is almost done, and I will
follow up on the work of integrating the blink planner with the Table API to
co-exist with the current flink planner.
In addition to this, the following features:
1. FLIP-32: Restructure flink-table for
Thanks Jark for bringing up this topic. I think proper concepts are very
important for users who are using Table API & SQL, especially for
them to have a clear understanding of the behavior of the SQL job. Also,
this is essential for connector developers to have a better
understanding of why we abstract
+1 to add benchmark component.
Best,
Kurt
On Wed, May 15, 2019 at 6:13 PM Piotr Nowojski wrote:
> Hi,
>
> I would like to propose two changes:
>
> 1. Renaming “Runtime / Operators” to “Runtime / Task” or something like
> “Runtime / Processing”. “Runtime / Operators” was confusing me, since it
Kurt Young created FLINK-12506:
--
Summary: Add more over window unit tests
Key: FLINK-12506
URL: https://issues.apache.org/jira/browse/FLINK-12506
Project: Flink
Issue Type: Improvement
Also +1 to support JDBC.
Best,
Kurt
On Wed, Apr 17, 2019 at 7:38 PM Stephan Ewen wrote:
> I think this problem sounds fixable. Having proper JDBC support through the
> SQL client would be really cool!
>
> Adding Timo and Shaoxuan here:
>
> Let's assume that the "collect()" call supports large
Replied in the user mailing list.
Best,
Kurt
On Mon, Apr 15, 2019 at 11:48 PM Felipe Gutierrez <
felipe.o.gutier...@gmail.com> wrote:
> Hi,
>
> I am trying to use the Blink implementation for "MapBundleFunction
> <
> https://github.com/felipegutierrez/explore-blink/blob/master/src/main/java/org/sen
Kurt Young created FLINK-12088:
--
Summary: Introduce unbounded streaming inner join operator
Key: FLINK-12088
URL: https://issues.apache.org/jira/browse/FLINK-12088
Project: Flink
Issue Type
Kurt Young created FLINK-12062:
--
Summary: Introduce bundle operator to streaming table runtime
Key: FLINK-12062
URL: https://issues.apache.org/jira/browse/FLINK-12062
Project: Flink
Issue Type
Kurt Young created FLINK-12061:
--
Summary: Add more window operator contract tests to table runtime
Key: FLINK-12061
URL: https://issues.apache.org/jira/browse/FLINK-12061
Project: Flink
Issue
Big +1 to this! I left some comments in google doc.
Best,
Kurt
On Wed, Mar 27, 2019 at 11:32 PM Timo Walther wrote:
> Hi everyone,
>
> some of you might have already read FLIP-32 [1] where we've described an
> approximate roadmap of how to handle the big Blink SQL contribution and
> how we can
+1 (non-binding)
Checked items:
- checked checksums and GPG files
- verified that the source archives do not contain any binaries
- checked that all POM files point to the same version
- build from source successfully
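The checksum item in the checklist above can be sketched as a small shell flow. A minimal, self-contained sketch follows: the archive and its `.sha512` file are fabricated locally here (the filenames are placeholders), whereas for a real release candidate both would be downloaded from the staging area.

```shell
# Sketch of the "checked checksums" verification step. The archive and
# its .sha512 file are fabricated locally so the flow is self-contained;
# for a real release candidate both would be downloaded, and the
# filenames here are placeholders.
printf 'release contents\n' > flink-src.tgz
sha512sum flink-src.tgz > flink-src.tgz.sha512   # stand-in for the published checksum

# The verification itself: recompute the digest and compare it with the
# published file. sha512sum -c prints "<file>: OK" on a match.
if sha512sum -c flink-src.tgz.sha512; then
  echo "checksum OK"
else
  echo "checksum MISMATCH" >&2
  exit 1
fi
```

The GPG check in the same list is analogous: `gpg --verify <archive>.asc <archive>` against the release manager's published key.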
Best,
Kurt
On Tue, Mar 26, 2019 at 10:57 AM Shaoxuan Wang wrote:
> +1 (non
>>
> >> How about:
> >>
> >> Table SQL / API
> >> Table SQL / Client
> >> Table SQL / Legacy Planner: Flink Table SQL runtime and plan
> translation.
> >> Table SQL / New Planner: plan-related for new Blink-based Table SQL
> runner.
>
Hi Aljoscha,
+1 to further separate table-relate jira components, but I would prefer to
move "Runtime / Operators" to a dedicated "Table SQL / Operators".
There is one concern about the "classic planner" and "new planner", the
naming will be inaccurate after blink merge done and we deprecated clas
Kurt Young created FLINK-11959:
--
Summary: Introduce window operator for blink streaming runtime
Key: FLINK-11959
URL: https://issues.apache.org/jira/browse/FLINK-11959
Project: Flink
Issue Type
Kurt Young created FLINK-11930:
--
Summary: Split SegmentsUtil into some dedicated utilities
Key: FLINK-11930
URL: https://issues.apache.org/jira/browse/FLINK-11930
Project: Flink
Issue Type
Kurt Young created FLINK-11927:
--
Summary: Code clean up and refactor BinaryHashTable and
LongHybridHashTable
Key: FLINK-11927
URL: https://issues.apache.org/jira/browse/FLINK-11927
Project: Flink
+1 (non-binding)
Checked items:
- checked checksums and GPG files
- verified that the source archives do not contain any binaries
- checked that all POM files point to the same version
- build from source
Best,
Kurt
On Tue, Mar 12, 2019 at 9:20 AM Congxian Qiu wrote:
> +1 (non-binding)
>
> C
Kurt Young created FLINK-11872:
--
Summary: update lz4 license file
Key: FLINK-11872
URL: https://issues.apache.org/jira/browse/FLINK-11872
Project: Flink
Issue Type: Improvement
Kurt Young created FLINK-11871:
--
Summary: Introduce LongHashTable to improve performance when join
key fits in long
Key: FLINK-11871
URL: https://issues.apache.org/jira/browse/FLINK-11871
Project: Flink
Kurt Young created FLINK-11864:
--
Summary: Let compressed channel reader/writer reuse the logic of
AsynchronousFileIOChannel
Key: FLINK-11864
URL: https://issues.apache.org/jira/browse/FLINK-11864
Kurt Young created FLINK-11863:
--
Summary: Introduce channel to read and write compressed data
Key: FLINK-11863
URL: https://issues.apache.org/jira/browse/FLINK-11863
Project: Flink
Issue Type
Kurt Young created FLINK-11858:
--
Summary: Introduce block compressor/decompressor for batch table
runtime
Key: FLINK-11858
URL: https://issues.apache.org/jira/browse/FLINK-11858
Project: Flink
Kurt Young created FLINK-11856:
--
Summary: Introduce BinaryHashTable and LongHashTable to batch
table runtime
Key: FLINK-11856
URL: https://issues.apache.org/jira/browse/FLINK-11856
Project: Flink
And sometimes just reimporting Maven will work.
Right-click the pom.xml located in Flink's root dir -> Maven -> Reimport
Best,
Kurt
On Wed, Mar 6, 2019 at 8:02 PM Chesnay Schepler wrote:
> Usually when I run into this i use "File -> Invalidate Caches /
> Restart... -> Invalidate and restart"
>
> On
Hi Dev,
I've been using the flinkbot and the labels for a couple of days, and it has
worked really well! I have a minor suggestion: can we
use different colors for different labels? We don't need to have different
colors for every label, but only enough to distinguish whether
someone has reviewed the PR.
For example,
Kurt Young created FLINK-11832:
--
Summary: Convert some InternalType property related functions to
InternalType's method
Key: FLINK-11832
URL: https://issues.apache.org/jira/browse/FLINK-11832
Pr
Kurt Young created FLINK-11831:
--
Summary: Separate CodeGeneratorContext for different generation
targets
Key: FLINK-11831
URL: https://issues.apache.org/jira/browse/FLINK-11831
Project: Flink
; As far as I understood, the idea is that the SQL API and programming APIs
> share the same set of operators (in the long run).
> With 1240 issues, the "API / Table SQL" component is one of the largest
> components. It definitely makes sense to split the SQL-related issues
> into a
Sorry to join the discussion after everything seems to have been settled
already, but I noticed we may need another component: "SQL / operators".
Do we still have a chance to add it?
Best,
Kurt
On Thu, Feb 28, 2019 at 5:59 PM Robert Metzger wrote:
> Okay, I will go with "Connectors / Misc" if nobod
it?ts=5c6a613e#heading=h.j0v7ubhwfl0y
>
> [3]
>
> https://github.com/apache/flink/blob/blink/flink-libraries/flink-table/src/main/java/org/apache/flink/table/runtime/window/WindowOperator.java
>
> [4]
>
> https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream
Thanks Aljoscha!
Best,
Kurt
On Mon, Feb 25, 2019 at 5:43 PM jincheng sun
wrote:
> Thanks, Aljoscha!
> That's very exciting, which means that the Flink community will have a
> big enhanced version coming soon!
>
> Cheers,
> Jincheng
>
> Aljoscha Krettek 于2019年2月25日周一 下午5:27写道:
>
> > Hi Ever
Kurt Young created FLINK-11674:
--
Summary: Add an initial Blink SQL code generator
Key: FLINK-11674
URL: https://issues.apache.org/jira/browse/FLINK-11674
Project: Flink
Issue Type: Sub-task
Kurt Young created FLINK-11675:
--
Summary: Add an initial support for running batch jobs with
streaming runtime
Key: FLINK-11675
URL: https://issues.apache.org/jira/browse/FLINK-11675
Project: Flink
Hi Rong,
Thanks for the improvement proposal. This topic aroused my interest since
we did some similar improvements in Blink.
After going through your design doc, I would like to share some thoughts:
1. It looks to me like the proposed SliceManager and MergeManager are quite
generic and can be used in var