Hi,
I worked with Konstantin and reviewed the PR.
I think the playground is a great way to get started with Flink and explore
its recovery mechanisms and unique features like savepoints.
I'm in favor of adding the required streaming example program for the 1.9
release unless there's a good
+1 to include this in 1.9.0; adding some examples doesn't look like a new
feature to me.
BTW, I am also trying this tutorial based on the release-1.9 branch, but I'm
blocked by:
git clone --branch release-1.10-SNAPSHOT g...@github.com:apache/flink-playgrounds.git
Neither 1.10 nor 1.9 exists in
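As a side note, a local sketch (with throwaway paths, not the actual playgrounds repo) of why such a clone fails: `git clone --branch` only accepts branches that exist on the remote, and `git ls-remote --heads` shows which those are.

```shell
set -e
# Create a throwaway "remote" repository with a single release branch.
remote=$(mktemp -d)
git -C "$remote" init -q
git -C "$remote" -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "initial commit"
git -C "$remote" branch release-1.9

# List the remote's branches; only these are valid --branch arguments.
git ls-remote --heads "$remote"

# Cloning an existing branch works:
git clone -q --branch release-1.9 "$remote" "$remote-clone"

# Cloning a branch that does not exist (like release-1.10-SNAPSHOT here)
# fails immediately instead of falling back to the default branch:
git clone -q --branch release-1.10-SNAPSHOT "$remote" "$remote-bad" \
    || echo "no such remote branch"
```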
[Forking off this thread to keep the announce thread "clean"]
Hi Kurt,
The playground needs a bit of manual work at the moment, because 1.9 is not
released yet.
The docker-compose and Flink configurations are still in a PR [1].
Also, the Flink 1.9 Docker images need to be built manually. When
Before backporting the playground PR to the release-1.9 branch, I'd like to
understand why the ClickEventCount job needs to be part of the Flink
distribution. Looking at the example, it seems to only work in combination
with a Kafka cluster. Since it is not self-contained, it does not add much
value for
Congratulations Hequn! Well deserved!
Best Regards,
Yu
On Thu, 8 Aug 2019 at 03:53, Haibo Sun wrote:
> Congratulations!
>
> Best,
> Haibo
>
> At 2019-08-08 02:08:21, "Yun Tang" wrote:
> >Congratulations Hequn.
> >
> >Best
> >Yun Tang
> >
> >From: Rong Rong
>
wangxiyuan created FLINK-13646:
--
Summary: Add ARM CI job definition scripts
Key: FLINK-13646
URL: https://issues.apache.org/jira/browse/FLINK-13646
Project: Flink
Issue Type: Sub-task
Thanks for the detailed instructions!
Best,
Kurt
On Thu, Aug 8, 2019 at 3:40 PM Fabian Hueske wrote:
> [Forking off this thread to keep the announce thread "clean"]
>
> Hi Kurt,
>
> The playground needs a bit of manual work at the moment, because 1.9 is
> not released yet.
> The
Till Rohrmann created FLINK-13647:
-
Summary: Allow default methods and static methods to be added to
public interfaces
Key: FLINK-13647
URL: https://issues.apache.org/jira/browse/FLINK-13647
Project:
Hi all!
I would like to bring this topic up, because we saw quite a few "secret"
post-feature-freeze feature merges.
The latest example was https://issues.apache.org/jira/browse/FLINK-13225
I would like to make sure that we are all on the same page on what a
feature freeze means and how to
Jark Wu created FLINK-13648:
---
Summary: Support "IS NOT DISTINCT FROM" operator in lookup join
Key: FLINK-13648
URL: https://issues.apache.org/jira/browse/FLINK-13648
Project: Flink
Issue Type: New
Hi Stephan,
Thanks for bringing this up. I think it's important, and a good time to
discuss what *feature freeze* really means. At least to me, it seems I had
some misunderstandings about this compared to other community members. But as
you pointed out in the JIRA and also in this mail, I think
Timo Walther created FLINK-13649:
Summary: Improve error message when job submission was not
successful
Key: FLINK-13649
URL: https://issues.apache.org/jira/browse/FLINK-13649
Project: Flink
Hi everyone,
As you might know, some of us are currently working on Docker-based
playgrounds that make it very easy for first-time Flink users to try out
and play with Flink [0].
Our current setup (still work in progress with some parts merged to the
master branch) looks as follows:
* The
Hi all,
sorry to resend the email with a corrected title.
We found a fatal bug, present since Flink 1.6, which causes the Flink Table
API to fail to correctly extract the table schema.
JIRA:
https://issues.apache.org/jira/projects/FLINK/issues/FLINK-13603?filter=allopenissues
There is a change in flink-core ->
Hey,
I retract my +1 (at least temporarily, until we discuss about alternative
solutions).
>> I would like to also raise an additional issue: currently quite some bugs
>> (like release blockers [1]) are being discovered by ITCases of the
>> connectors. It means that at least initially, the
Again, feature freeze is not about "what was planned", it is about what is
ready. Otherwise, it is completely unpredictable when a release will come.
Everyone has a pet feature they want to see in. If everyone just makes
decisions by themselves and pushes, we can never get anywhere.
Disagreement
I pretty much agree with your points, Dawid. Some problems which we want
to solve with a repository split are clearly caused by the existing build
system (no incremental builds, not enough flexibility to only build a
subset of modules). Given that a repository split would be a major
endeavour
One more thing to add.
If we move the code to flink-playgrounds and build custom images, the
playgrounds effort won't be tied to the Flink 1.9 release any more.
So, we'd be a bit more flexible time-wise but would also need to manually
update the playgrounds for every release.
On Thu, Aug 8, 2019
Hi,
First of all, I agree with Dawid and David's point.
I will share some experience with a repository split. We went through one
for Alibaba Blink, which I think is the most worthwhile project to learn
from.
We split the Blink project into "blink-connectors" and "blink", but we didn't
get much
OK, let's stop the discussion about the playground in the release 1.9
thread.
I've started a new thread on dev@f.a.o to continue the discussion [1].
Best, Fabian
[1]
https://lists.apache.org/thread.html/4f54c0b4162e3db8626afdca5c354050282282d3cc229d01f2d8ca3e@%3Cdev.flink.apache.org%3E
On Thu,
I remember that Patrick (who has maintained the docker-flink images so far)
frequently raised the point that it's good practice to keep the images
decoupled from the project release cycle.
Changes to the images can then be made frequently and released quickly.
In addition, one typically supports
Sergei Winitzki created FLINK-13658:
---
Summary: Combine two triggers into one (for streaming windows)
Key: FLINK-13658
URL: https://issues.apache.org/jira/browse/FLINK-13658
Project: Flink
Hi Timo,
Thanks for sharing your opinion. By wastefulness, I meant that we had planned
and done a lot of work that ended up not being useful in the released product.
Instead of making many partial features, we'd rather make fewer but
complete features. We expected a good integration with Hive in 1.9, but
+1 for the motivation, -1 for the solution, as all of the problems mentioned
above can be addressed with the mono-repo as well.
Multiple repositories:
1) This creates a big pain in the case of a change that targets the code base
in multiple repositories. The change needs to be split into multiple PRs, which
need to
Hey Fabian,
I support option 1.
As per FLIP-42, playgrounds are going to become core to Flink's getting-started
experience, and I believe it is worth the effort to get this right.
- As you mentioned, we may (and in my opinion definitely will) add more images
in the future. Setting up an
Hi,
To subscribe to the dev list, you should mail dev-subscr...@flink.apache.org
instead of dev@flink.apache.org. An automatic reply will be sent, and you
just need to reply to it to complete the subscription.
Best,
tison.
疯琴 <35023...@qq.com> wrote on Fri, Aug 9, 2019 at 7:51 AM:
> I didn't receive any message from you for more
Jark Wu created FLINK-13661:
---
Summary: Add a stream specific CREATE TABLE SQL DDL
Key: FLINK-13661
URL: https://issues.apache.org/jira/browse/FLINK-13661
Project: Flink
Issue Type: Sub-task
Jeff Zhang created FLINK-13659:
--
Summary: Add method listDatabases(catalog) and listTables(catalog,
database) in TableEnvironment
Key: FLINK-13659
URL: https://issues.apache.org/jira/browse/FLINK-13659
Congratulations Hequn!
Best,
Yun
--
From: Congxian Qiu
Send Time: 2019 Aug. 8 (Thu.) 21:34
To: Yu Li
Cc: Haibo Sun; dev; Rong Rong; user
Subject: Re: Re: [ANNOUNCE] Hequn becomes a Flink committer
Congratulations Hequn!
Best,
MalcolmSanders created FLINK-13660:
--
Summary: Cannot submit job on Flink session cluster on kubernetes
with multiple JM pods (zk HA) through web frontend
Key: FLINK-13660
URL:
Hi devs,
Flink uses inverted class loading by default to allow different versions of
dependencies in user code, but currently this approach is not applied to the
client, so I'm wondering whether there is a special reason for this?
If not, I think it would be great to add inverted class loading as
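For context, the inverted (child-first) class loading mentioned here is configured on the cluster side in flink-conf.yaml; a minimal sketch (option names as in the Flink docs, the pattern value below is a made-up illustration):

```yaml
# flink-conf.yaml
# "child-first" (the default) resolves classes from the user jar before
# Flink's own classpath, so user code can ship its own dependency versions.
classloader.resolve-order: child-first
# Packages that should always be loaded parent-first, even in child-first
# mode (the value here is only an example):
classloader.parent-first-patterns.additional: "org.example.internal."
```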
I didn't receive any message from you for more than a day.
LiJun created FLINK-13655:
-
Summary: Caused by: java.io.IOException: Thread 'SortMerger
spilling thread' terminated due to an exception
Key: FLINK-13655
URL: https://issues.apache.org/jira/browse/FLINK-13655
First of all, I don't have much (if any) experience working
with a multi-repository project of Flink's size. I would like to mention
a few thoughts of mine, though. In general I am slightly against
splitting the repository. I fear that what we actually want to do is to
introduce double
> I would like to also raise an additional issue: currently quite some
bugs (like release blockers [1]) are being discovered by ITCases of the
connectors. It means that at least initially, the main repository will
lose some test coverage.
True, but I think this is more a symptom of us not
Thanks for the update and driving the discussion Becket!
+1 for starting a vote.
On Wed, Aug 7, 2019 at 11:44 AM, Becket Qin wrote:
> Thanks Stephan.
>
> I think we have resolved all the comments on the wiki page. There are two
> minor changes made to the bylaws since last week.
> 1. For 2/3
Jark Wu created FLINK-13657:
---
Summary: Remove FlinkJoinToMultiJoinRule pull-in from Calcite
Key: FLINK-13657
URL: https://issues.apache.org/jira/browse/FLINK-13657
Project: Flink
Issue Type:
Xiangfu Lee created FLINK-13654:
---
Summary: Wrong word used in comments in the class
Key: FLINK-13654
URL: https://issues.apache.org/jira/browse/FLINK-13654
Project: Flink
Issue Type: Bug
Jark Wu created FLINK-13656:
---
Summary: Upgrade Calcite dependency to 1.21
Key: FLINK-13656
URL: https://issues.apache.org/jira/browse/FLINK-13656
Project: Flink
Issue Type: Improvement
Congratulations Hequn!
Best,
Congxian
Yu Li wrote on Thu, Aug 8, 2019 at 2:02 PM:
> Congratulations Hequn! Well deserved!
>
> Best Regards,
> Yu
>
>
> On Thu, 8 Aug 2019 at 03:53, Haibo Sun wrote:
>
>> Congratulations!
>>
>> Best,
>> Haibo
>>
>> At 2019-08-08 02:08:21, "Yun Tang" wrote:
>>
Hi Kurt,
I posted my opinion around this particular example in FLINK-13225.
Regarding the definition of "feature freeze": I think it is good to
write down more of the implicit processes that we had in the past. The
bylaws, coding guidelines, and a better FLIP process are very good steps
Zhenghua Gao created FLINK-13651:
Summary: table api not support cast to decimal with precision and
scale
Key: FLINK-13651
URL: https://issues.apache.org/jira/browse/FLINK-13651
Project: Flink
Hi Till,
as Fabian said, we considered the option you mentioned, but in the end
decided that not maintaining separate images has more advantages.
In the context of FLIP-42 we are also revisiting the examples in general
and want to clean these up a bit. So, for what it's worth, there will be an
Just as a short addendum, there are also benefits to the ClickEventCount job
not being part of the Flink repository: if there were a bug in the job, you
would otherwise have to wait for the next Flink release to fix it.
On Thu, Aug 8, 2019 at 2:24 PM Till Rohrmann wrote:
> I see that
Hi Till,
we will try to find another way to make the playground available for users
soon. The discussion of whether and how to split up the Flink repository
started only after we had discussed the playground and the flink-playgrounds
repository. I think this is the reason we went this way, not necessarily
Hi all,
I understand the merged PR is a feature, but it's something we had planned
and requested for a long time. In fact, on the Hive connector side, we have
done a lot of work (supporting Hive UDFs). Without this PR, all that work
would be wasted, and the Hive feature itself in 1.9 would also be close
Chesnay Schepler created FLINK-13652:
Summary: Setup instructions for creating an ARM environment
Key: FLINK-13652
URL: https://issues.apache.org/jira/browse/FLINK-13652
Project: Flink
Hi,
Thanks for proposing and writing this down, Chesnay.
Generally speaking, +1 from my side for the idea. It will create additional
pain for cross-repository development, like a new feature in the connectors
that needs a change in the main repository. I've worked in such a setup
before, and the
I see that keeping the playground job in the Flink repository has a couple
of advantages, among other things that it's easier to keep up to date.
However, in particular in the light of the potential repository split where
we want to separate connectors from Flink core, it seems very problematic
to