For reference, this is a Jira issue that was created by the OP
about using Scalafmt for the Flink code base:
https://issues.apache.org/jira/browse/FLINK-19159.
Best,
Aljoscha
I thought about this some more. One of the important parts of the
Iceberg sink is to know whether we have already committed some
DataFiles. Currently, this is implemented by writing a (JobId,
MaxCheckpointId) tuple to the Iceberg table when committing. When
restoring from a failure we check
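The committed-checkpoint check described above can be sketched roughly as follows. This is a minimal, self-contained illustration of the idempotence idea only; the class and method names (`CommitCheck`, `shouldCommit`, `recordCommit`) are my own stand-ins, not the actual Iceberg sink API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: track (JobId -> max committed checkpoint id), as the
// Iceberg sink does by writing this tuple to the table on commit. On restore,
// DataFiles from already-committed checkpoints are skipped.
public class CommitCheck {
    private final Map<String, Long> committed = new HashMap<>();

    // True if this checkpoint's DataFiles have not been committed yet.
    public boolean shouldCommit(String jobId, long checkpointId) {
        return checkpointId > committed.getOrDefault(jobId, -1L);
    }

    // Record a successful commit, keeping the maximum checkpoint id per job.
    public void recordCommit(String jobId, long checkpointId) {
        committed.merge(jobId, checkpointId, Math::max);
    }

    public static void main(String[] args) {
        CommitCheck check = new CommitCheck();
        check.recordCommit("job-1", 42L);
        System.out.println(check.shouldCommit("job-1", 42L)); // false: already committed
        System.out.println(check.shouldCommit("job-1", 43L)); // true: safe to commit
    }
}
```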
On 14.09.20 01:23, Steven Wu wrote:
## Writer interface
For the Writer interface, should we add "*prepareSnapshot*" before the
checkpoint barrier is emitted downstream? IcebergWriter would need it. Or
would the framework call "*flush*" before the barrier is emitted downstream?
that guarantee would
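To make the question concrete, here is a minimal sketch of a writer with such a hook, flushing its buffer before the barrier is emitted. This is an illustrative assumption on my part, not the FLIP's actual interface; the names `BufferingWriter` and `prepareSnapshot` are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical writer: buffers elements and flushes them when the framework
// calls prepareSnapshot, just before emitting the checkpoint barrier, so the
// snapshot covers everything written in this checkpoint interval.
public class BufferingWriter {
    private final List<String> buffer = new ArrayList<>();
    private final List<String> flushed = new ArrayList<>();

    public void write(String element) {
        buffer.add(element);
    }

    // Would be called by the framework before the barrier goes downstream.
    public void prepareSnapshot(long checkpointId) {
        flushed.addAll(buffer);
        buffer.clear();
    }

    public List<String> flushedElements() {
        return flushed;
    }

    public static void main(String[] args) {
        BufferingWriter w = new BufferingWriter();
        w.write("a");
        w.write("b");
        w.prepareSnapshot(1L);
        System.out.println(w.flushedElements()); // [a, b]
    }
}
```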
Hi all,
After the discussion in [1], I would like to open a voting thread for
FLIP-134 (https://s.apache.org/FLIP-134) [2] which discusses a new BATCH
execution mode for the DataStream API.
The vote will be open until September 17, unless there is an objection
or not enough votes.
Congratulations!
Aljoscha
On 14.09.20 10:37, Robert Metzger wrote:
Hi all,
On behalf of the PMC, I’m very happy to announce Niels Basjes as a new
Flink committer.
Niels has been an active community member since the early days of Flink,
with 19 commits dating back to 2015.
Besides his
, that makes sense but I will consider
out of the scope of this FLIP. I want to focus on simplifying APIs.
@Aljoscha Krettek
My feeling is that state backends and checkpointing are going to be
integral to Flink for many years, regardless of other enhancements, so this
change is still valuable.
Since
Thanks for the thoughtful comments! I'll try and address them inline
below. I'm hoping to start a VOTE thread soon if there are no other
comments by the end of today.
On 10.09.20 15:40, David Anderson wrote:
Having just re-read FLIP-134, I think it mostly makes sense, though I'm not
exactly
2 we would depend on `flink-streaming-java` and even
`flink-runtime`. For the new source API (FLIP-27) we managed to keep the
dependencies slim and the code is in flink-core. I'd be very happy if we
can manage the same for the new sink API.
Best,
Aljoscha
On 11.09.20 12:02, Aljoscha Krettek
Hi Everyone,
thanks to Guowei for publishing the FLIP, and thanks Steven for the very
thoughtful email!
We thought a lot internally about some of the questions you posted but
left a lot (almost all) of the implementation details out of the FLIP
for now because we wanted to focus on
Yes! I would be in favour of this since it's blocking us from upgrading
certain dependencies.
I would also be in favour of dropping Scala completely but that's a
different story.
Aljoscha
On 10.09.20 16:51, Seth Wiesman wrote:
Hi Everyone,
Think of this as a pre-flip, but what does
+1 (binding)
Aljoscha
Aljoscha Krettek created FLINK-19193:
Summary: Upgrade migration guidelines to use stop-with-savepoint
Key: FLINK-19193
URL: https://issues.apache.org/jira/browse/FLINK-19193
Project: Flink
+1 (binding)
Aljoscha
On 10.09.20 13:57, Timo Walther wrote:
Hi all,
after the discussion in [1], I would like to open a voting thread for
FLIP-107 [2] which discusses how to handle data next to the main payload
(i.e. key and value formats as well as metadata in general) in SQL
connectors
I've only been watching this from the sidelines but that latest proposal
looks very good to me!
Aljoscha
On 10.09.20 12:20, Kurt Young wrote:
The new syntax looks good to me.
Best,
Kurt
On Thu, Sep 10, 2020 at 5:57 PM Jark Wu wrote:
Hi Timo,
I have one minor suggestion.
Maybe the
On 10.09.20 11:30, Dawid Wysakowicz wrote:
I am not sure about the option for ignoring the Triggers. Do you mean to
ignore all the Triggers including e.g. Flink's such as CountTrigger,
EventTimeTrigger etc.? Won't that effectively disable the WindowOperator
altogether? Or even worse make it
On 10.09.20 09:00, Xuannan Su wrote:
How do you imagine that? Where do you distinguish between per-job and
session mode?
The StreamExecutionEnvironment can distinguish between per-job and session mode
by the type of the PipelineExecutor, i.e., AbstractJobClusterExecutor vs
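The distinction described above could be sketched as a simple type check on the executor. The interface and class shapes below are simplified stand-ins for Flink's PipelineExecutor hierarchy, not the real signatures.

```java
// Simplified stand-ins for Flink's executor hierarchy (assumptions, not the
// actual classes): per-job mode uses a job-cluster executor, session mode a
// session-cluster executor.
interface PipelineExecutor {}

class AbstractJobClusterExecutor implements PipelineExecutor {}

class AbstractSessionClusterExecutor implements PipelineExecutor {}

public class ModeCheck {
    // The environment could distinguish modes by the executor's type.
    static boolean isPerJobMode(PipelineExecutor executor) {
        return executor instanceof AbstractJobClusterExecutor;
    }

    public static void main(String[] args) {
        System.out.println(isPerJobMode(new AbstractJobClusterExecutor()));     // true
        System.out.println(isPerJobMode(new AbstractSessionClusterExecutor())); // false
    }
}
```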
Hi Devs,
@Users: I'm cc'ing the user ML to see if there are any users that are
relying on this feature. Please comment here if that is the case.
I'd like to discuss the deprecation and eventual removal of UnionList
Operator State, aka Operator State with Union Redistribution. If you
don't
I like it a lot!
I think it makes sense to clean this up despite the planned new
fault-tolerance mechanisms. In the future, users will decide which
mechanism to use and I can imagine that a lot of them will keep using
the current mechanism for quite a while to come. But I'm happy to yield
to
ually reach the time of pending timers? Or did you have something
else in mind with this?
Yes, that's what I meant. I actually took the options from this
issue[1], where there is some discussion on that topic as well.
Best,
Dawid
[1] https://issues.apache.org/jira/browse/FLINK-18647
On 08/09/2020 10:
I'm hereby cancelling this vote. There was more discussion on the
[DISCUSS] thread for FLIP-134.
Aljoscha
On 24.08.20 11:33, Kostas Kloudas wrote:
Hi all,
After the discussion in [1], I would like to open a voting thread for
FLIP-134 [2] which discusses the semantics that the DataStream API
ant to get rid of TypeComparator, I think we still need to find a
way to introduce this back.
Best,
Kurt
On Tue, Sep 8, 2020 at 3:04 AM Aljoscha Krettek
wrote:
Yes, I think we can address the problem of indeterminacy in a separate
FLIP because we're already in it.
Aljoscha
On 07.09.20 17:00,
+1
We just need to make sure to find a good name before the release but
shouldn't block any work on this.
Aljoscha
On 08.09.20 07:59, Xintong Song wrote:
Thanks for the vote, @Jincheng.
Concerning the namings, the original idea was, as you suggested, to have
separate configuration names
appreciate if you comment
before that, if you still have some outstanding ideas.
Best,
Dawid
On 04/09/2020 17:13, Aljoscha Krettek wrote:
Seth is right, I was just about to write that as well. There is a
problem, though, because some of our TypeSerializers are not
deterministic even though we use
Aljoscha Krettek created FLINK-19155:
Summary: ResultPartitionTest is unstable
Key: FLINK-19155
URL: https://issues.apache.org/jira/browse/FLINK-19155
Project: Flink
Issue Type: New
Aljoscha Krettek created FLINK-19153:
Summary: FLIP-131: Consolidate the user-facing Dataflow SDKs/APIs
(and deprecate the DataSet API)
Key: FLINK-19153
URL: https://issues.apache.org/jira/browse/FLINK-19153
Hi all,
The voting time for FLIP-131 [1] has passed. I'm closing the vote now.
Including my implicit vote, there were 7 +1 votes, 5 of which are binding:
- Dawid Wysakowicz (binding)
- Piotr Nowojski (binding)
- David Anderson (binding)
- Zhu Zhu (binding)
- Aljoscha Krettek
There were no -1
Aljoscha Krettek created FLINK-19152:
Summary: Remove Kafka 0.10.x and 0.11.x connectors
Key: FLINK-19152
URL: https://issues.apache.org/jira/browse/FLINK-19152
Project: Flink
Issue Type
I like the proposal! I didn't check the implementation section in detail,
but the SQL DDL examples look good, and the options for specifying how
fields are mapped to keys/values look good as well.
Aljoscha
On 04.09.20 11:47, Dawid Wysakowicz wrote:
Hi Timo,
Thank you very much for the update.
Thanks for publishing the FLIP!
On 2020/09/01 06:49:06, Dawid Wysakowicz wrote:
> 1. How to sort/group keys? What representation of the key should we
> use? Should we sort on the binary form or should we depend on
> Comparators being available.
Initially, I suggested to Dawid (in
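The two options in the quoted question can be contrasted with a small sketch: sorting on a serialized binary form (where only grouping equal keys matters, not a meaningful order) versus relying on a Comparator over the deserialized values. The UTF-8 serialization here is my own stand-in, not Flink's TypeSerializer.

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class KeySorting {
    // Option 1: sort on the binary form, via lexicographic unsigned-byte
    // comparison. The resulting order need not be meaningful; it only has to
    // bring equal keys next to each other for grouping.
    static int compareBinary(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int cmp = Integer.compare(a[i] & 0xFF, b[i] & 0xFF);
            if (cmp != 0) {
                return cmp;
            }
        }
        return Integer.compare(a.length, b.length);
    }

    public static void main(String[] args) {
        // Option 2: depend on a Comparator over the deserialized keys.
        List<String> keys = new ArrayList<>(List.of("b", "a", "c"));
        keys.sort(Comparator.naturalOrder());
        System.out.println(keys); // [a, b, c]

        byte[] x = "a".getBytes(StandardCharsets.UTF_8);
        byte[] y = "b".getBytes(StandardCharsets.UTF_8);
        System.out.println(compareBinary(x, y) < 0); // true
    }
}
```

Binary sorting avoids requiring a Comparator for every key type, but it only groups correctly if serialization is deterministic, which is exactly the concern raised later in this thread about non-deterministic TypeSerializers.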
Aljoscha Krettek created FLINK-19135:
Summary: (Stream)ExecutionEnvironment.execute() should not throw
ExecutionException
Key: FLINK-19135
URL: https://issues.apache.org/jira/browse/FLINK-19135
Hi all,
After the discussion in [1], I would like to open a voting thread for
FLIP-131 (https://s.apache.org/FLIP-131) [2] which discusses the
deprecation of the DataSet API and future work on the DataStream API and
Table API for bounded (batch) execution.
The vote will be open until
Aljoscha Krettek created FLINK-19123:
Summary: TestStreamEnvironment does not use shared MiniCluster for
executeAsync()
Key: FLINK-19123
URL: https://issues.apache.org/jira/browse/FLINK-19123
Hi,
playing devil's advocate here: should we even make the memory weights
configurable? We could go with weights that should make sense for most
cases in the first version and only introduce configurable weights when
(if) users need them.
Regarding where/how things are configured, I think
deprecating some of the relational-like
methods, which we should rather redirect to the Table API. I added a
section about it to the FLIP (mostly copying over your message). Let me
know what you think about it.
Best,
Dawid
On 25/08/2020 11:39, Aljoscha Krettek wrote:
Thanks for creating this FLIP! I think
, and change the source
operator id to discard the old connector states.
3. Start the job with the savepoint, and read Kafka from group offsets.
Best,
Paul Lam
On 27.08.20 16:27, Aljoscha Krettek wrote:
@Konstantin: Yes, I'm talking about dropping those modules. We don't have any special
code
+1
Aljoscha
On 28.08.20 09:41, Dawid Wysakowicz wrote:
Hi all,
I would like to start a vote for removing deprecated, but
Public(Evolving) methods in the upcoming 1.12 release:
* XxxDataStream#fold and all related classes (such as
FoldingDescriptor, FoldFunction, ...)
*
Did you consider DataStream.project() yet? In general I think we should
remove most of the relational-ish methods from DataStream. More
candidates in this set of methods would be the tuple index/expression
methods for aggregations like min/max etc...
Aljoscha
On 25.08.20 20:52, Konstantin
remember correctly, the universal connector is compatible
with 0.10 brokers, but I want to double check that.
Best,
Paul Lam
On 24.08.20 22:46, Aljoscha Krettek <aljos...@apache.org> wrote:
Hi all,
this thought came up on FLINK-17260 [1] but I think it would be a
good i
On 30.07.20 17:36, Aljoscha Krettek wrote:
I see, we actually have some thoughts along that line as well. We have
ideas about adding such functionality for `Transformation`, which is the
graph structure that underlies both the DataStream API and the newer
Table API Runner/Planner.
There a very rough
Hi all,
this thought came up on FLINK-17260 [1] but I think it would be a good
idea in general. The issue reminded us that Kafka didn't have an
idempotent/fault-tolerant Producer before Kafka 0.11.0. By now we have
had the "modern" Kafka connector that roughly follows new Kafka releases
for
Thanks for the pointer!
On 03.08.20 10:29, Till Rohrmann wrote:
Hi Xia Rui,
thanks for reporting this issue. I think FLINK-15116 [1] caused the
regression. The problem is indeed that we no longer set the
lastJobExecutionResult when using the ContextEnvironment. The problem has
not surfaced
ional unit, just to make the code more modular.
It's very comfortable indeed.
On Thu, Jul 30, 2020 at 5:20 PM Aljoscha Krettek
wrote:
That is good input! I was not aware that anyone was actually using
`runCustomOperation()`. Out of curiosity, what are you using that for?
We have definitely tho
arting the discussion. I am in favor of
unifying the APIs the way described in the FLIP and deprecating the
DataSet
API. I am looking forward to the detailed discussion of the changes
necessary.
Best,
Marton
On Wed, Jul 29, 2020 at 12:46 PM Aljoscha Krettek <
aljos...@apache.org>
wrote:
Hi Everyone,
my colleagues (in cc) and I would like to propose this FLIP for
discussion. In short, we want to reduce the number of APIs that we have
by deprecating the DataSet API. This is a big step for Flink, that's why
I'm also cross-posting this to the User Mailing List.
FLIP-131:
+1 (binding)
Aljoscha
On 28.07.20 04:12, Dian Fu wrote:
Thanks for driving this Shuiqiang.
+1
Regards,
Dian
On 27.07.20 at 3:33 PM, jincheng sun wrote:
+1(binding)
Best,
Jincheng
On Fri, 24.07.20 at 8:32 PM, Shuiqiang Chen wrote:
Hi everyone,
I would like to start the vote for FLIP-130[1], which is
implementation.
And thank you all for joining in the discussion. It seems that we have
reached a consensus. I will start a vote for this FLIP later today.
Best,
Shuiqiang
On Fri, 24.07.20 at 5:29 PM, Hequn Cheng wrote:
Thanks a lot for your valuable feedback and suggestions! @Aljoscha
Krettek
+1 to the vote
I'm jumping in quite late but I think overall this is a very good effort
and it's in very good shape now.
Best,
Aljoscha
On 24.07.20 10:24, Jark Wu wrote:
Thanks Dawid,
Regarding (3), I think the point you mentioned, that it will affect other
behavior such as listing tables, is a strong reason.
I will at
ve updated
the example section of the FLIP to reflect the design.)
Highly appreciated for your suggestions again. Looking forward to your
feedback.
Best,
Shuiqiang
On Wed, 15.07.20 at 5:58 PM, Aljoscha Krettek wrote:
Hi,
thanks for the proposal! I have some comments about the API. We should
not
That is a very good observation!
In an ideal world, I would say we disallow #remove() because we cannot
efficiently implement it for RocksDB and we should keep the behaviour
consistent between the backends. Now that we already have the
functionality for the heap-based backends I think we
Aljoscha Krettek created FLINK-18693:
Summary: AvroSerializationSchema does not work with types
generated by avrohugger
Key: FLINK-18693
URL: https://issues.apache.org/jira/browse/FLINK-18693
Aljoscha Krettek created FLINK-18692:
Summary: AvroSerializationSchema does not work with types
generated by avrohugger
Key: FLINK-18692
URL: https://issues.apache.org/jira/browse/FLINK-18692
Yes, thanks Andrey! That's a good reminder for everyone. :-)
On 20.07.20 16:02, Andrey Zagrebin wrote:
Hi Flink Devs,
I would like to remind you that we have a 'starter' label [1] to annotate
Jira issues which need a contribution and which are not very
complicated for the new contributors. The
luding its parameters and types) in order to enable better UIs, then
> the important thing is to make things consistent and aligned with the new
> client developments and exploit this new dev sprint to fix such issues.
>
> On Mon, Mar 30, 2020 at 11:38 AM Aljoscha Krettek
> wrote:
>
Hi,
thanks for the proposal! I have some comments about the API. We should not
blindly copy the existing Java DataStream because we made some mistakes with
that and we now have a chance to fix them and not forward them to a new API.
I don't think we need SingleOutputStreamOperator, in the Scala
Hi,
this was actually my mistake back then. :-O
I'm open to removing the generic parameter from Context if we are sure that it
won't break user code. I think it doesn't, because you cannot actually use it
with the generic parameter, as you found. Also, I think custom sink
implementations in
Hi,
I like the proposal! I remember that Beam also had more human-readable names
for the modules and found that helpful. Also, changing the names shouldn't
change anything for users because dependencies are referred to by
group/artifactId, it really just makes build output and IDE a bit more
Aljoscha Krettek created FLINK-18569:
Summary: Add Table.limit() which is the equivalent of SQL LIMIT
Key: FLINK-18569
URL: https://issues.apache.org/jira/browse/FLINK-18569
Project: Flink
+1
I'd also be in favour of releasing a 1.11.1 quickly
Aljoscha
On 09.07.20 13:57, Jark Wu wrote:
Hi Dian,
Glad to hear that you want to be the release manager of Flink 1.11.1.
I am very willing to help you with the final steps of the release process.
Best,
Jark
On Thu, 9 Jul 2020 at
+1
- verified hash of source release
- verified signature of source release
- source release compiles (with Scala 2.11)
- examples run without spurious log output (errors, exceptions)
I can confirm that log scrolling doesn't work on Firefox, though it
never has.
I would also feel better
Aljoscha Krettek created FLINK-18478:
Summary: AvroDeserializationSchema does not work with types
generated by avrohugger
Key: FLINK-18478
URL: https://issues.apache.org/jira/browse/FLINK-18478
+1
Aljoscha
On 01.07.20 15:14, Tzu-Li (Gordon) Tai wrote:
+1
On Wed, Jul 1, 2020, 8:57 PM Cranmer, Danny wrote:
Hi all,
I'd like to start a voting thread for FLIP-128 [1], for which we've
reached consensus in [2].
This vote will be open for a minimum of 3 days, until 13:00 UTC, July 4th.
Wow, that is one thorough FLIP! I didn't fully go into all the technical
details but I think the general direction of this is good. If no one
objects I'd say we can proceed to voting and figure out the technical
details during implementation/review (if any remain unclear).
Best,
Aljoscha
On
Hi!
On 24.06.20 00:51, Thomas Weise wrote:
* -import org.apache.flink.table.api.java.StreamTableEnvironment;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
This is very unfortunate yes, please see
much be @Public and forcing
us to keep things around for longer than we wanted,
but then we did a full 180 and stopped giving _any_ guarantees for all
new APIs.
It is about time we change this.
On 24/06/2020 09:15, Aljoscha Krettek wrote:
Yes, I would say the situation is different for minor vs
Konstantin
[]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Stability-guarantees-for-PublicEvolving-classes-tp41459.html
On Tue, Jun 23, 2020 at 3:32 PM Aljoscha Krettek
wrote:
Hi,
this has come up a few times now and I think we need to discuss the
guarantees that we wan
Hi,
this has come up a few times now and I think we need to discuss the
guarantees that we want to officially give for this. What I mean by
cross-version compatibility is using, say, a Flink 1.10 Kafka connector
dependency/jar with Flink 1.11, or a Flink 1.10.0 connector with Flink
1.10.1.
,
Konstantin
On Mon 15. Jun 2020 at 12:56, Aljoscha Krettek
wrote:
Hi All,
This came to my mind because of the master/slave discussion in [1]
and
the larger discussions about inequality/civil rights happening right
now
in the world. I think for this reason alone we should use a name that
does
Aljoscha Krettek created FLINK-18381:
Summary: Update Jekyll to 4.0.1
Key: FLINK-18381
URL: https://issues.apache.org/jira/browse/FLINK-18381
Project: Flink
Issue Type: Bug
Aljoscha Krettek created FLINK-18377:
Summary: Rename "Flink Master" back to JobManager in documentation
Key: FLINK-18377
URL: https://issues.apache.org/jira/browse/FLINK-18377
Proj
Hi All,
This came to my mind because of the master/slave discussion in [1] and
the larger discussions about inequality/civil rights happening right now
in the world. I think for this reason alone we should use a name that
does not include "master".
We could rename it back to JobManager,
Hi,
is anyone actually using our .editorconfig file? IntelliJ has a plugin
for this that is actually quite powerful.
I managed to write a .editorconfig file that I quite like:
https://github.com/aljoscha/flink/commits/new-editorconfig. For me to
use that, we would either need to update our
Aljoscha Krettek created FLINK-18120:
Summary: Don't expand documentation sections by default
Key: FLINK-18120
URL: https://issues.apache.org/jira/browse/FLINK-18120
Project: Flink
Issue
Aljoscha Krettek created FLINK-18036:
Summary: Chinese documentation build is broken
Key: FLINK-18036
URL: https://issues.apache.org/jira/browse/FLINK-18036
Project: Flink
Issue Type
Aljoscha Krettek created FLINK-18032:
Summary: Remove outdated sections in migration guide
Key: FLINK-18032
URL: https://issues.apache.org/jira/browse/FLINK-18032
Project: Flink
Issue
+1
I'm in favour of backporting this because we otherwise would immediately
break the API between 1.11 and 1.12.
Best,
Aljoscha
On 26.05.20 17:05, Zhijiang wrote:
In the beginning, I had somewhat similar concerns as Piotr mentioned below.
After some offline discussions, also as explained by
Aljoscha Krettek created FLINK-18011:
Summary: Make WatermarkStrategy/WatermarkStrategies more ergonomic
Key: FLINK-18011
URL: https://issues.apache.org/jira/browse/FLINK-18011
Project: Flink
Aljoscha Krettek created FLINK-17956:
Summary: Add Flink 1.11 MigrationVersion
Key: FLINK-17956
URL: https://issues.apache.org/jira/browse/FLINK-17956
Project: Flink
Issue Type: Task
e...
Best,
Dawid
On 12/05/2020 18:21, Aljoscha Krettek wrote:
Yes, I am also ok with a SerializableTimestampAssigner. This only
looks a bit clumsy in the API but as a user (that uses lambdas) you
should not see this. I pushed changes for this to my branch:
https://github.com/aljoscha/flink/tree
Aljoscha Krettek created FLINK-17886:
Summary: Update documentation for new
WatermarkGenerator/WatermarkStrategies
Key: FLINK-17886
URL: https://issues.apache.org/jira/browse/FLINK-17886
Project
+1 That's also how I think of the semantics of the fields.
Aljoscha
On 22.05.20 08:07, Robert Metzger wrote:
Hi all,
I have the feeling that the semantics of some of our JIRA fields (mostly
"affects versions", "fix versions" and resolve / close) are not defined in
the same way by all the core
Aljoscha Krettek created FLINK-17815:
Summary: Change KafkaConnector to give per-partition metric group
to WatermarkGenerator
Key: FLINK-17815
URL: https://issues.apache.org/jira/browse/FLINK-17815
about minor changes (names and serializability)
pending, but these should not conflict with the design here.
On Tue, May 12, 2020 at 10:01 AM Aljoscha Krettek
wrote:
Hi all,
I would like to start the vote for FLIP-126 [1], which has been discussed and
reached consensus in the discussion thread [2
Aljoscha Krettek created FLINK-17773:
Summary: Update documentation for new
WatermarkGenerator/WatermarkStrategies
Key: FLINK-17773
URL: https://issues.apache.org/jira/browse/FLINK-17773
Project
Aljoscha Krettek created FLINK-17766:
Summary: Use checkpoint lock instead of fine-grained locking in
Kafka AbstractFetcher
Key: FLINK-17766
URL: https://issues.apache.org/jira/browse/FLINK-17766
Aljoscha Krettek created FLINK-17669:
Summary: Use new WatermarkStrategy/WatermarkGenerator in Kafka
connector
Key: FLINK-17669
URL: https://issues.apache.org/jira/browse/FLINK-17669
Project
Aljoscha Krettek created FLINK-17661:
Summary: Add APIs for using new
WatermarkStrategy/WatermarkGenerator
Key: FLINK-17661
URL: https://issues.apache.org/jira/browse/FLINK-17661
Project: Flink
Aljoscha Krettek created FLINK-17659:
Summary: Add common watermark strategies and WatermarkStrategies
helper
Key: FLINK-17659
URL: https://issues.apache.org/jira/browse/FLINK-17659
Project
Aljoscha Krettek created FLINK-17658:
Summary: Add new TimestampAssigner and WatermarkGenerator
interfaces
Key: FLINK-17658
URL: https://issues.apache.org/jira/browse/FLINK-17658
Project: Flink
Aljoscha Krettek created FLINK-17655:
Summary: Remove old and long deprecated TimestampExtractor
Key: FLINK-17655
URL: https://issues.apache.org/jira/browse/FLINK-17655
Project: Flink
Aljoscha Krettek created FLINK-17654:
Summary: Move Clock classes to flink-core to make them usable
outside runtime
Key: FLINK-17654
URL: https://issues.apache.org/jira/browse/FLINK-17654
Project
Aljoscha Krettek created FLINK-17653:
Summary: FLIP-126: Unify (and separate) Watermark Assigners
Key: FLINK-17653
URL: https://issues.apache.org/jira/browse/FLINK-17653
Project: Flink
record
timestamp, in stream-record timestamp).
On Tue, May 12, 2020 at 4:12 PM Aljoscha Krettek
wrote:
Definitely +1 to point 2) raised by Dawid. I'm not sure on points 1) and
3).
1) I can see the benefit of that but in reality most timestamp assigners
will probably need to be Serializable. If y
a the factories) can extend this if needed.
I think that the factory approach supports code-generated extractors in
a
cleaner way even than an extractor with an open/init method.
On Mon, May 11, 2020 at 3:38 PM Aljoscha Krettek
wrote:
We're slightly running out of time. I would propose w
Hi all,
I would like to start the vote for FLIP-126 [1], which has been discussed and
reached consensus in the discussion thread [2].
The vote will be open until May 15th (72h), unless there is an objection or not
enough votes.
Best,
Aljoscha
[1]
Hi,
The problem is that the JobClient is talking to the wrong system. In YARN
per-job mode the cluster will only run as long as the job runs so there will be
no-one there to respond with the job status after the job is finished.
I think the solution is that the JobClient should talk to the
added connector.
I know that's a bit unorthodox but would everyone be OK with what's
currently there and then we iterate?
Best,
Aljoscha
On 11.05.20 13:57, Aljoscha Krettek wrote:
Ah, I meant to write this in my previous email, sorry about that.
The WatermarkStrategy, which is basically
29, 2020 at 7:24 PM Aljoscha Krettek
wrote:
Regarding the WatermarkGenerator (WG) interface itself. The proposal is
basically to turn emitting into a "flatMap", we give the
WatermarkGenerator a "collector" (the WatermarkOutput) and the WG can
decide whether to output a wate
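The "flatMap-style" idea above can be sketched as follows. The interface shapes here are simplified stand-ins for illustration, not the exact signatures that ended up in Flink; the bounded-out-of-orderness generator is just one example of a WG deciding for itself when to emit.

```java
// Simplified stand-in for the "collector" handed to the generator.
interface WatermarkOutput {
    void emitWatermark(long timestamp);
}

// Simplified stand-in for the generator interface: like a flatMap, the
// generator receives an output and decides whether to emit a watermark.
interface WatermarkGenerator<T> {
    void onEvent(T event, long eventTimestamp, WatermarkOutput output);
}

// Example generator: tracks the max timestamp seen and emits a watermark
// lagging behind it by a fixed out-of-orderness bound.
class BoundedOutOfOrderness<T> implements WatermarkGenerator<T> {
    private final long delayMillis;
    private long maxTimestamp = Long.MIN_VALUE;

    BoundedOutOfOrderness(long delayMillis) {
        this.delayMillis = delayMillis;
    }

    @Override
    public void onEvent(T event, long eventTimestamp, WatermarkOutput output) {
        maxTimestamp = Math.max(maxTimestamp, eventTimestamp);
        // The generator, not the framework, decides to emit here.
        output.emitWatermark(maxTimestamp - delayMillis);
    }
}

public class WatermarkSketch {
    public static void main(String[] args) {
        WatermarkGenerator<String> gen = new BoundedOutOfOrderness<>(10L);
        gen.onEvent("e1", 100L, ts -> System.out.println("watermark: " + ts)); // watermark: 90
    }
}
```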
Could you post the Jira issue here? I don't see it mentioned in this
thread so far.
Best,
Aljoscha
On 05.05.20 12:32, Roc Marshal wrote:
Hi, Aljoscha. I have updated the JIRA according to your suggestion. Thank you very much.
Best,
Roc
At 2020-05-05 16:04:01, "Aljoscha Krettek"
a separate release because of
potential dependency conflicts for users who don't want to use SQL.
Cheers,
Till
On Tue, May 5, 2020 at 10:01 AM Aljoscha Krettek
wrote:
Thanks Till for summarizing!
Another alternative is also to stick to one distribution but remove one
of the very heavy filesystem
Hi,
image attachments don't work on this ML. You will have to upload the
image somewhere and post a link.
Best,
Aljoscha
On 02.05.20 09:16, Jeff Zhang wrote:
Hi Roc,
You can try Flink on Zeppelin, where you can submit a Flink job to YARN
directly without starting a Flink cluster yourself.