t; and "fat" solution. One comment about the fat one, I think
we
need to
put all needed jars into /lib (or /plugins). Put jars into /opt and
relying
on users moving
them from /opt to /lib doesn't really improve the out-of-box experience.
Best,
Kurt
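The /opt-to-/lib workflow Kurt refers to looks roughly like this (a sketch only; the jar name is a placeholder, since the actual file name depends on the connector and Flink/Scala version):

```shell
# From the root of a Flink distribution: jars shipped in opt/ are NOT on
# the classpath; copying one into lib/ activates it on the next restart.
cp opt/flink-sql-client_2.11-1.10.0.jar lib/
```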
On Fri, Apr 24, 2020 at 8:28 PM A
Unfortunately, I found this bug which prevents the TaskCancelerWatchdog
[sic] from working: https://issues.apache.org/jira/browse/FLINK-17514. I
think it's quite crucial that this failsafe mechanism works correctly.
We should cancel the release and fix it.
Best,
Aljoscha
On 05.05.20 05:55,
Aljoscha Krettek created FLINK-17514:
Summary: TaskCancelerWatchdog does not kill TaskManager
Key: FLINK-17514
URL: https://issues.apache.org/jira/browse/FLINK-17514
Project: Flink
Issue
ub.com/senegalo/flink/pull/1 it's not that much of a change in
terms of logic but more of what is exposed.
Let me know how you want me to proceed.
Thanks again,
Karim Mansour
On Thu, Apr 30, 2020 at 10:40 AM Aljoscha Krettek <aljos...@apache.org>
wrote:
Hi,
I think it's good
I agree with Till and Xintong: if the ExternalResourceInfo is only a
holder of properties that doesn't have any subclasses, it can just become
the "properties" itself.
Aljoscha
On 30.04.20 12:49, Till Rohrmann wrote:
Thanks for the clarification.
I think you are right that the typed approach
Hi,
I think it's good to contribute the changes to Flink directly since we
already have the RMQ connector in the repository.
I would propose something similar to the Kafka connector, which takes
both the generic DeserializationSchema and a KafkaDeserializationSchema
that is specific to
ward to update to the new unified watermark generators once
FLIP-126 has been accepted.
Regards,
Timo
[1] https://github.com/apache/flink/pull/11692
On 20.04.20 18:10, Aljoscha Krettek wrote:
Hi Everyone!
We would like to start a discussion on "FLIP-126: Unify (and separate)
Watermark
orget to
iterate the returned iterators, e.g. a user submits a bunch of DDLs and
expects the framework to execute them one by one. But it didn't.
Best,
Kurt
On Wed, Apr 1, 2020 at 5:10 PM Aljoscha Krettek <aljos...@apache.org>
wrote:
Agreed to what Dawid and Timo said.
To answ
Hi Niels,
I think Robert was referring to the fact that Apache considers only the
source release to be "the release", everything else is called
convenience release.
Best,
Aljoscha
On 27.04.20 19:43, Niels Basjes wrote:
Hi,
In my opinion the docker images are essentially simply differently
Aljoscha Krettek created FLINK-17415:
Summary: Fold API-agnostic documentation into DataStream
documentation (chinese)
Key: FLINK-17415
URL: https://issues.apache.org/jira/browse/FLINK-17415
Hi Manish,
welcome to the community! You could start from a user program example
and then try and figure out how that leads to job execution. So probably
start with the DataStream WordCount example, figure out what the methods
on DataStream do, that is how they build up a graph of
On 27.04.20 09:34, David Morávek wrote:
When we include `flatMap` in between rebalances ->
`.rebalance().flatMap(...).rebalance()`, we need to reshuffle again,
because dataset distribution may have changed (e.g. you can possibly emit
an unbounded stream from a single element). Unfortunately
ine there still being a desire to have a "minimal" docker
file, for users that want to keep the container images as small as
possible, to speed up deployment. It is fine if that would not be the
default, though.
On Fri, Apr 17, 2020 at 12:16 PM Aljoscha Krettek
wrote:
I think having suc
Definitely +1! I'm always game for decreasing the API surface if it
doesn't decrease functionality.
Aljoscha
On 23.04.20 14:18, DONG, Weike wrote:
Hi Stephan,
+1 for the removal, as there are so many deprecated methods scattered
around, making APIs a little bit messy and confusing.
Best,
+1
Aljoscha
On 23.04.20 15:23, Till Rohrmann wrote:
+1 for extending the feature freeze until May 15th.
Cheers,
Till
On Thu, Apr 23, 2020 at 1:00 PM Piotr Nowojski wrote:
Hi Stephan,
As release manager I’ve seen that quite a few features could use the
extra couple of weeks. This
Aljoscha Krettek created FLINK-17349:
Summary: Reduce runtime of LocalExecutorITCase
Key: FLINK-17349
URL: https://issues.apache.org/jira/browse/FLINK-17349
Project: Flink
Issue Type
+1 to getting rid of flink-shaded-hadoop. But we need to document how
people can now get a Flink dist that works with Hadoop. Currently, when
you download the single shaded jar you immediately get support for
submitting to YARN via bin/flink run.
Aljoscha
On 22.04.20 09:08, Till Rohrmann
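For context, the approach that replaced the single shaded Hadoop jar is to point Flink at an existing Hadoop installation via the environment. A minimal sketch, assuming a working `hadoop` CLI on the machine (paths and the example jar are illustrative, not from this thread):

```shell
# Make the local Hadoop distribution visible to Flink instead of bundling
# flink-shaded-hadoop; `hadoop classpath` prints the distribution's jars.
export HADOOP_CLASSPATH=$(hadoop classpath)

# With the classpath exported, YARN submission works as before:
./bin/flink run -m yarn-cluster ./examples/streaming/WordCount.jar
```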
Hi Everyone!
We would like to start a discussion on "FLIP-126: Unify (and separate)
Watermark Assigners" [1]. This work was started by Stephan in an
experimental branch. I expanded on that work to provide a PoC for the
changes proposed in this FLIP: [2].
Currently, we have two different
ution that we release could
remain "slim" or we could even make it slimmer. I might be missing
something here though.
Best,
Dawid
On 16/04/2020 11:02, Aljoscha Krettek wrote:
I want to reinforce my opinion from earlier: This is about improving
the situation both for first-time users
Aljoscha Krettek created FLINK-17217:
Summary: Download links for central.maven.org in doc don't work
Key: FLINK-17217
URL: https://issues.apache.org/jira/browse/FLINK-17217
Project: Flink
Hi,
first, excellent that you're driving this, Marta!
By now I have made quite some progress on the FLIP-42 restructuring, so
it is not a good effort for someone to join now. Plus there is also
[1], which is about incorporating the existing Flink Training material
into the concepts section
+1 (binding)
Aljoscha
d flink-json should be in the
distribution,
they are quite small and don't have other dependencies.
Best,
Jark
On Wed, 15 Apr 2020 at 15:44, Jeff Zhang
wrote:
Hi Aljoscha,
Big +1 for the fat flink distribution. Where do you plan to put these
connectors? opt or lib?
Aljoscha Krettek wrote on Wed, Apr 15, 2020
I'd be very happy if someone took over that part of the documentation!
There are open issues for the TODOs in the concepts section here:
https://issues.apache.org/jira/browse/FLINK-12639. But feel free to
comment there/close/re-arrange as you see fit. Maybe we use this thread
and Jira to
Is the only really new method on the public APIs
getExternalResourceInfos(..) on the RuntimeContext? I'm generally quite
skeptical about adding anything to that interface but the method seems ok.
Side note for the configuration keys: the pattern is similar to metrics
configuration. There we
Hi Everyone,
I'd like to discuss releasing a more full-featured Flink
distribution. The motivation is that there is friction for SQL/Table API
users who want to use Table connectors that are not in the
current Flink distribution. For these users the workflow is currently
Aljoscha Krettek created FLINK-17136:
Summary: Rename toplevel DataSet/DataStream section titles
Key: FLINK-17136
URL: https://issues.apache.org/jira/browse/FLINK-17136
Project: Flink
On 10.04.20 17:35, Jark Wu wrote:
1) For correctness, it is necessary to perform the watermark generation as
early as possible in order to be close to the actual data
generation within a source's data partition. This is also the purpose of
per-partition watermark and event-time alignment.
Aljoscha Krettek created FLINK-17074:
Summary: Deprecate DataStream.keyBy() that use tuple/expression
keys
Key: FLINK-17074
URL: https://issues.apache.org/jira/browse/FLINK-17074
Project: Flink
That is very nice! Thanks for taking care of this ~3q
On 09.04.20 11:08, Dian Fu wrote:
Cool! Thanks Yun for this effort. Very useful feature.
Regards,
Dian
On Apr 9, 2020, at 4:32 PM, Yu Li wrote:
Great! Thanks for the efforts Yun.
Best Regards,
Yu
On Thu, 9 Apr 2020 at 16:15, Jark Wu wrote:
Hi Everyone,
we have a lot of commits recently that were committed by "GitHub
". This happens when your GitHub account is not
configured correctly with respect to your email address. Please make
sure that your commits somehow show who is the committer.
For reference, check out
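The misconfiguration described above is usually fixed by setting the committer identity on the machine you commit from, so it matches an address known to GitHub. A minimal sketch (name and address are placeholders):

```shell
# Set the identity recorded in future commits for this repository;
# add --global to apply it to all repositories on this machine.
git config user.name "Jane Developer"
git config user.email "jane@example.com"

# Verify which identity the next commit will record:
git config user.email
```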
On 08.04.20 04:27, Jark Wu wrote:
I have a minor concern about the global configuration
`table.optimizer.dynamic-table-options.enabled`, does it belong to
optimizer?
From my point of view, it is just an API to set table options and uses
Calcite in the implementation.
I'm also thinking about
On 07.04.20 08:45, Dawid Wysakowicz wrote:
@Jark I was aware of the implementation of SinkFunction, but it was a
conscious choice to not do it that way.
Personally I am against giving a default implementation to both the new
and old methods. This results in an interface that by default does
Aljoscha Krettek created FLINK-17009:
Summary: Fold API-agnostic documentation into DataStream
documentation
Key: FLINK-17009
URL: https://issues.apache.org/jira/browse/FLINK-17009
Project: Flink
On 02.04.20 16:24, Aljoscha Krettek wrote:
I think we're designing ourselves into ever more complicated corners
here. Maybe we need to take a step back and reconsider. What would you
think about this (somewhat) simpler proposal:
- introduce a hint called CONNECTOR_OPTIONS(k=v,...) or
CONNECTOR_
Aljoscha Krettek created FLINK-16976:
Summary: Update chinese documentation for ListCheckpointed
deprecation
Key: FLINK-16976
URL: https://issues.apache.org/jira/browse/FLINK-16976
Project: Flink
+1
Aljoscha
Hi All,
we're currently struggling a bit with test stability, especially on
Azure, it seems. If you encounter a test failure in a PR or anywhere
else, please take the time to check if there is already a Jira issue or
create a new one. If there is already an Issue, please report the
additional
I think we're designing ourselves into ever more complicated corners
here. Maybe we need to take a step back and reconsider. What would you
think about this (somewhat) simpler proposal:
- introduce a hint called CONNECTOR_OPTIONS(k=v,...) or
CONNECTOR_PROPERTIES, depending on what naming we
+1 to making Blink the default planner, we definitely don't want to
maintain two planners for much longer.
Best,
Aljoscha
Agreed to what Dawid and Timo said.
To answer your question about multi-line SQL: no, we don't think we need
this in Flink 1.11; we only wanted to make sure that the interfaces that
we now put in place will potentially allow this in the future.
Best,
Aljoscha
On 01.04.20 09:31, godfrey he
On 18.03.20 14:45, Flavio Pompermaier wrote:
what do you think if we exploit this job-submission sprint to also address
the problem discussed in https://issues.apache.org/jira/browse/FLINK-10862?
That's a good idea! What should we do? It seems that most committers on
the issue were in favour
Thanks for the update!
On 13.03.20 13:47, Rong Rong wrote:
1. I think we have finally pinpointed the root cause of this issue:
When partitions are assigned manually (e.g. with the assign() API instead of
the subscribe() API) the client will not try to rediscover the coordinator if
it dies [1].
+1
Aljoscha
On 12.03.20 14:05, Flavio Pompermaier wrote:
There's also a related issue that I opened a long time ago
https://issues.apache.org/jira/browse/FLINK-10879 that could be closed once
this FLIP is implemented (or closed immediately and referenced as a duplicate
of the new JIRA ticket that would be
Aljoscha Krettek created FLINK-16572:
Summary: CheckPubSubEmulatorTest is flaky on Azure
Key: FLINK-16572
URL: https://issues.apache.org/jira/browse/FLINK-16572
Project: Flink
Issue Type
+1 (binding)
Aljoscha
On 09.03.20 06:10, Rong Rong wrote:
- Is this feature (disabling checkpoint and restarting job from Kafka
committed GROUP_OFFSET) not supported?
I believe the Flink community never put much (any?) effort into this
because the Flink Kafka Consumer does its own offset handling. Starting
from
Hi,
I don't understand this discussion. Hints, as I understand them, should
work like this:
- hints are *optional* advice for the optimizer to try and help it to
find a good execution strategy
- hints should not change query semantics, i.e. they should not change
connector properties
Thanks! I'm reading the document now and will get back to you.
Best,
Aljoscha
On 10.03.20 14:35, Robert Metzger wrote:
I'm wondering whether we should file a ticket to remove the *.bat files in
bin/ ?
We can leave them there because they're not doing much harm, and
removing them might actively break some existing setup.
Best,
Aljoscha
On 10.03.20 03:31, Yang Wang wrote:
For the "run-job", do you mean to submit a Flink job to an existing session
or
just like the current per-job to start a dedicated Flink cluster? Then will
"flink run" be deprecated?
I was talking about the per-job mode that starts a dedicated Flink
cluster.
Since no one said we should keep the Windows scripts and
no one responded on the user ML thread, I'll close the Jira issues/PRs
about extending the scripts.
Aljoscha
On 09.03.20 03:15, tison wrote:
So far, there is a PR [1] that implements the proposal in this thread.
I look forward to your reviews, or I can start a vote if required.
Nice, I'll try and get to review that this week.
Best,
Aljoscha
> For the -R flag, this was in the PoC that I published just as a quick
> implementation, so that I can move fast to the entrypoint part.
> Personally, I would not even be against having a separate command in
> the CLI for this, sth like run-on-cluster or something along those
> lines.
> What do
If there is a noreply email address that could be on purpose. This
happens when you configure github to not show your real e-mail address.
This also happens when contributors open a PR and don't want to show
their real e-mail address. I talked to at least one person who had it set
up like this
, ProgramInvocationException, we just throw in place as it
is accessible.
- transitively, flink-optimizer, for one utility.
- transitively, flink-java, for several utilities.
flink-runtime:
- mainly for JobGraph generating.
Following a previous discussion with @Aljoscha Krettek, our
goal is briefly making
Aljoscha Krettek created FLINK-16216:
Summary: Describe end-to-end exactly once programs in stateful
stream processing concepts documentation
Key: FLINK-16216
URL: https://issues.apache.org/jira/browse/FLINK
Aljoscha Krettek created FLINK-16214:
Summary: Describe how state is different for stream/batch programs
in concepts documentation
Key: FLINK-16214
URL: https://issues.apache.org/jira/browse/FLINK-16214
Aljoscha Krettek created FLINK-16213:
Summary: Add "What Is State" section in concepts documentation
Key: FLINK-16213
URL: https://issues.apache.org/jira/browse/FLINK-16213
Proj
Aljoscha Krettek created FLINK-16212:
Summary: Describe how Flink is a unified batch/stream processing
system in concepts documentation
Key: FLINK-16212
URL: https://issues.apache.org/jira/browse/FLINK-16212
Aljoscha Krettek created FLINK-16211:
Summary: Add introduction to stream processing concepts
documentation
Key: FLINK-16211
URL: https://issues.apache.org/jira/browse/FLINK-16211
Project: Flink
Aljoscha Krettek created FLINK-16210:
Summary: Add section about applications and clusters/session in
concepts documentation
Key: FLINK-16210
URL: https://issues.apache.org/jira/browse/FLINK-16210
Aljoscha Krettek created FLINK-16209:
Summary: Add Latency and Completeness section in timely stream
processing concepts
Key: FLINK-16209
URL: https://issues.apache.org/jira/browse/FLINK-16209
Aljoscha Krettek created FLINK-16208:
Summary: Add introduction to timely stream processing concepts
documentation
Key: FLINK-16208
URL: https://issues.apache.org/jira/browse/FLINK-16208
Project
Aljoscha Krettek created FLINK-16207:
Summary: In stream processing concepts section, rework
distribution patterns description
Key: FLINK-16207
URL: https://issues.apache.org/jira/browse/FLINK-16207
Hi,
the background is this series of Jira Issues and PRs around extending
the .bat scripts for windows:
https://issues.apache.org/jira/browse/FLINK-5333.
I would like to resolve this by either closing the Jira Issues as
"Won't Do" or finally merging these PRs. The questions I have are:
+1
Although I would hope that it can be more than just "anticipated".
Best,
Aljoscha
On 19.02.20 15:40, Till Rohrmann wrote:
Thanks for volunteering as one of our release managers Zhijiang.
+1 for the *anticipated feature freeze date* end of April. As we go along
and collect more data points
x
commits
included in a PR
$ git shortlog --grep 'hotfix' -s -n release-0.9.0..
94 Stephan Ewen
42 Aljoscha Krettek
20 Till Rohrmann
16 Robert Metzger
13 Ufuk Celebi
9 Fabian Hueske
9 Maximilian Michels
6 Greg Hogan
5 Stefano Baghino
Hi,
thanks for starting this discussion!
However, I have a somewhat opposing opinion to this: we should disallow
using Google Docs for FLIPs and FLIP discussions and follow the already
established process more strictly.
My reasons for this are:
- discussions on the Google Doc are not
Wouldn't removing the ES 2.x connector be enough because we can then
update the ES 5.x connector? It seems there are some users that still
want to use that one.
Best,
Aljoscha
On 18.02.20 10:42, Robert Metzger wrote:
The ES5 connector is causing some problems on the CI system. It would be
Aljoscha Krettek created FLINK-16144:
Summary: Add client.timeout setting and use that for CLI operations
Key: FLINK-16144
URL: https://issues.apache.org/jira/browse/FLINK-16144
Project: Flink
Great summary! Thanks for adding the translation specification in
it.
I learned a lot from the guide.
Best,
Jark
On Fri, 14 Feb 2020 at 23:39, Aljoscha Krettek <aljos...@apache.org>
wrote:
Hi Everyone,
we just merged a new style guide for documentation writing:
https://flink.apache.org/co
Hi Everyone,
we just merged a new style guide for documentation writing:
https://flink.apache.org/contributing/docs-style.html.
Anyone who is writing documentation or is planning to do so should check
this out. Please open a Jira Issue or respond here if you have any
comments or questions.
Aljoscha Krettek created FLINK-16049:
Summary: Remove outdated "Best Practices" section from
Application Development Section
Key: FLINK-16049
URL: https://issues.apache.org/jira/browse/F
Hi,
what's the difference in approach to the mentioned related Jira Issue
([1])? I commented there because I'm skeptical about adding
Hadoop-specific code to the generic cluster components.
Best,
Aljoscha
[1] https://issues.apache.org/jira/browse/FLINK-14317
On 13.02.20 03:47, SHI Xiaogang
ook works on this map.
This is very fragile and depends on a lot of internals. Kind of like
exposing the JobGraph but much worse. I think we can do better.
Gyula
On Fri, Feb 7, 2020 at 9:55 AM Aljoscha Krettek
wrote:
If we need it, we can probably beef up the JobListener to allow
accessing s
Aljoscha Krettek created FLINK-16045:
Summary: Extract connectors documentation to a top-level section
Key: FLINK-16045
URL: https://issues.apache.org/jira/browse/FLINK-16045
Project: Flink
Aljoscha Krettek created FLINK-16044:
Summary: Extract libraries documentation to a top-level section
Key: FLINK-16044
URL: https://issues.apache.org/jira/browse/FLINK-16044
Project: Flink
Aljoscha Krettek created FLINK-16041:
Summary: Expand "popular" documentation sections by default
Key: FLINK-16041
URL: https://issues.apache.org/jira/browse/FLINK-16041
Proj
Aljoscha Krettek created FLINK-16000:
Summary: Move "Project Build Setup" to "Getting Started" in
documentation
Key: FLINK-16000
URL: https://issues.apache.org/jira/browse/FLINK-16000
+1
Best,
Aljoscha
On 11.02.20 11:17, Jingsong Li wrote:
Thanks Dawid for your explanation,
+1 for vote.
So I am a big +1 to accepting java.lang.Object in the Java DSL; without
the Scala implicit conversion, a lot of "lit" calls look unfriendly to users.
Best,
Jingsong Lee
On Tue, Feb 11, 2020 at 6:07
Aljoscha Krettek created FLINK-15999:
Summary: Extract “Concepts” material from API/Library sections and
start proper concepts section
Key: FLINK-15999
URL: https://issues.apache.org/jira/browse/FLINK-15999
Aljoscha Krettek created FLINK-15998:
Summary: Revert rename of "Job Cluster" to "Application Cluster"
in documentation
Key: FLINK-15998
URL: https://issues.apache.org/jir
Aljoscha Krettek created FLINK-15997:
Summary: Make documentation 404 page look like a documentation page
Key: FLINK-15997
URL: https://issues.apache.org/jira/browse/FLINK-15997
Project: Flink
Aljoscha Krettek created FLINK-15993:
Summary: Add timeout to 404 documentation redirect, add explanation
Key: FLINK-15993
URL: https://issues.apache.org/jira/browse/FLINK-15993
Project: Flink
+1 for dropping them, this stuff is quite old by now.
On 10.02.20 15:04, Benchao Li wrote:
+1 for dropping 2.x - 5.x.
FYI, currently only 6.x and 7.x ES connectors are supported by the Table API.
Flavio Pompermaier wrote on Mon, Feb 10, 2020 at 10:03 PM:
+1 for dropping all Elasticsearch connectors < 6.x
On
If we need it, we can probably beef up the JobListener to allow
accessing some information about the whole graph or sources and sinks.
My only concern is that we don't have a stable interface for
our job graphs/pipelines right now.
Best,
Aljoscha
On 06.02.20 23:00, Gyula Fóra
I would say a ML discussion or even a Jira issue is enough because
a) the methods are already deprecated
b) the methods are @PublicEvolving, which I don't consider a super
strong guarantee to users (we still shouldn't remove them lightly, but
we can if we have to...)
Best,
Aljoscha
On
already prepared the package),
personally I think it is worth doing.
What's your thought? :)
Best,
Jincheng
Aljoscha Krettek wrote on Tue, Feb 4, 2020 at 4:00 PM:
Hi,
I think that's a good idea, but we will also soon have Flink 1.10 anyways.
Best,
Aljoscha
On 04.02.20 07:25, Hequn Cheng wrote:
Hi
Aljoscha Krettek created FLINK-15904:
Summary: Make Kafka Consumer work with activated
"disableGenericTypes()"
Key: FLINK-15904
URL: https://issues.apache.org/jira/browse/FLINK-15904
Hi,
I think that's a good idea, but we will also soon have Flink 1.10 anyways.
Best,
Aljoscha
On 04.02.20 07:25, Hequn Cheng wrote:
Hi Jincheng,
+1 for this proposal.
From the perspective of users, I think it would be nice to have PyFlink on
PyPI, which makes it much easier to install PyFlink.
+1 (binding)
- I verified the checksum
- I verified the signatures
- I eyeballed the diff in pom files between 1.9 and 1.10 and checked any
newly added dependencies. They are ok. If the 1.9 licensing was correct
the licensing on this should also be correct
- I manually installed Flink Python
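The checksum and signature checks in a vote like the one above can be sketched as follows. The artifact names are placeholders, and this assumes the `.sha512` file is in `sha512sum -c` format (some Apache releases use a different checksum layout):

```shell
# Verify the SHA-512 checksum shipped next to the release artifact:
sha512sum -c flink-1.10.0-src.tgz.sha512

# Import the release managers' public keys and verify the detached signature:
gpg --import KEYS
gpg --verify flink-1.10.0-src.tgz.asc flink-1.10.0-src.tgz
```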
specifying a savepoint on
the CLI. Passing multiple savepoints to individual environments would be
necessary but I don't know what would be a good solution.
To me, it feels like multi-execute() only makes sense for batch programs.
Best,
Aljoscha
On 23.01.20 17:03, Aljoscha Krettek wrote:
Hi,
I'm
"execute-style" of writing jobs does
not work well for streaming programs and we might have to re-introduce
an interface like
interface FlinkJob {
    Pipeline getPipeline();
}
for streaming scenarios.
Kostas:
I have to think about the whole issue more, but definitely an interface
like the
+1
I approve this FLIP.
Best,
Aljoscha
On 20.01.20 15:24, Piotr Nowojski wrote:
Thank you all for the votes.
So far we have received 7 approving votes, 2 of which are binding, and there
are no -1 votes:
* Nico (binding)
* Zhijiang (binding)
* Zhenghua (non-binding)
* Yun (non-binding)
* Haibo
Aljoscha Krettek created FLINK-15735:
Summary: Too many warnings when running bin/start-cluster.sh
Key: FLINK-15735
URL: https://issues.apache.org/jira/browse/FLINK-15735
Project: Flink
this is by design, @Aljoscha
Krettek <aljos...@apache.org> would you please share the initial idea
when introducing this for the first time?
[1]
https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/operators/windows.html#reducefunction
Best
Yu
Hi,
As I said in the discussion on the Jira issue, I’m in favour of this change!
This is the Jira Issue, for reference:
https://issues.apache.org/jira/browse/FLINK-15424
Best,
Aljoscha
> On 8. Jan 2020, at 15:16, Congxian Qiu wrote:
>
> Dear All
>
>
> Currently, we found the
Aljoscha Krettek created FLINK-15518:
Summary: Don't hide web frontend side pane automatically
Key: FLINK-15518
URL: https://issues.apache.org/jira/browse/FLINK-15518
Project: Flink