however
> >>>>>>>>>>>> the customers who specifically asked for this were mainly in
> >>>>>>>>>>>> the banking
> >>>>>>>>>>>> and telco sector.
> >>>>>>>>>>>>
> >>>>>>>>>>>> BR,
> >>>>>>>>>>>> G
> >>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>> On Thu, Jun 3, 2021 at 9:20 AM Till Rohrmann <
> >>>>>>>>>>>> trohrm...@apache.org> wrote:
> >>>>>>>>>>>>
> >>>>>>>>>>>> > Thanks for updating the document Márton. Why is it that
> banks
> >>>>>>>>>>>> will
> >>>>>>>>>>>> > consider it more secure if Flink comes with Kerberos
> >>>>>>>>>>>> authentication
> >>>>>>>>>>>> > (assuming a properly secured setup)? I mean if an attacker
> >>>>>>>>>>>> can get access
> >>>>>>>>>>>> > to one of the machines, then it should also be possible to
> >>>>>>>>>>>> obtain the right
> >>>>>>>>>>>> > Kerberos token.
> >>>>>>>>>>>> >
> >>>>>>>>>>>> > I am not an authentication expert, which is why I wanted to
> >>>>>>>>>>>> > ask: what authentication protocols are there other than
> >>>>>>>>>>>> > Kerberos? Why did
> >>>>>>>>>>>> we select
> >>>>>>>>>>>> > Kerberos and not any other authentication protocol? Maybe
> you
> >>>>>>>>>>>> can list the
> >>>>>>>>>>>> > pros and cons for the different protocols. Is Kerberos also
> >>>>>>>>>>>> the standard
> >>>>>>>>>>>> > authentication protocol for Kubernetes deployments? If not,
> >>>>>>>>>>>> what would be
> >>>>>>>>>>>> > the answer when deploying on K8s?
> >>>>>>>>>>>> >
> >>>>>>>>>>>> > Cheers,
> >>>>>>>>>>>> > Till
> >>>>>>>>>>>> >
> >>>>>>>>>>>> > On Wed, Jun 2, 2021 at 12:07 PM Gabor Somogyi <
> >>>>>>>>>>>> gabor.g.somo...@gmail.com>
> >>>>>>>>>>>> > wrote:
> >>>>>>>>>>>> >
> >>>>>>>>>>>> >> Hi team,
> >>>>>>>>>>>> >>
> >>>>>>>>>>>> >> Happy to be here and hope I can provide quality additions
> in
> >>>>>>>>>>>> the future.
> >>>>>>>>>>>> >>
> >>>>>>>>>>>> >> Thank you all for the helpful suggestions!
> >>>>>>>>>>>> >> Taking them into account, the FLIP has been modified and the
> >>>>>>>>>>>> >> work continues on the already existing Jira ticket.
> >>>>>>>>>>>> >>
> >>>>>>>>>>>> >> BR,
> >>>>>>>>>>>> >> G
> >>>>>>>>>>>> >>
> >>>>>>>>>>>> >>
> >>>>>>>>>>>> >> On Wed, Jun 2, 2021 at 11:23 AM Márton Balassi <
> >>>>>>>>>>>> balassi.mar...@gmail.com>
> >>>>>>>>>>>> >> wrote:
> >>>>>>>>>>>> >>
> >>>>>>>>>>>> >>> Thanks, Chesnay - I totally missed that. Answered on the
> >>>>>>>>>>>> ticket too, let
> >>>>>>>>>>>> >>> us continue there then.
> >>>>>>>>>>>> >>>
> >>>>>>>>>>>> >>> Till, I agree that we should keep this codepath as slim as
> >>>>>>>>>>>> possible. It
> >>>>>>>>>>>> >>> is an important design decision that we aim to keep the
> >>>>>>>>>>>> list of
> >>>>>>>>>>>> >>> authentication protocols to a minimum. We believe that this
> >>>>>>>>>>>> >>> should not be a primary concern of Flink; a trusted proxy
> >>>>>>>>>>>> >>> service (for example Apache Knox) should be used to enable a
> >>>>>>>>>>>> >>> multitude of end-user authentication mechanisms. The bare
> >>>>>>>>>>>> >>> minimum of authentication mechanisms to support therefore
> >>>>>>>>>>>> >>> consists of a single strong authentication protocol, for
> >>>>>>>>>>>> >>> which Kerberos is the enterprise solution, plus HTTP Basic
> >>>>>>>>>>>> >>> primarily for development and light-weight scenarios.
> >>>>>>>>>>>> >>>
> >>>>>>>>>>>> >>> Added the above wording to G's doc.
> >>>>>>>>>>>> >>>
> >>>>>>>>>>>> >>>
> >>>>>>>>>>>>
> https://docs.google.com/document/d/1NMPeJ9H0G49TGy3AzTVVJVKmYC0okwOtqLTSPnGqzHw/edit
> >>>>>>>>>>>> >>>
> >>>>>>>>>>>> >>>
> >>>>>>>>>>>> >>>
> >>>>>>>>>>>> >>> On Tue, Jun 1, 2021 at 11:47 AM Chesnay Schepler <
> >>>>>>>>>>>> ches...@apache.org>
> >>>>>>>>>>>> >>> wrote:
> >>>>>>>>>>>> >>>
> >>>>>>>>>>>> >>>> There's a related effort:
> >>>>>>>>>>>> >>>> https://issues.apache.org/jira/browse/FLINK-21108
> >>>>>>>>>>>> >>>>
> >>>>>>>>>>>> >>>> On 6/1/2021 10:14 AM, Till Rohrmann wrote:
> >>>>>>>>>>>> >>>> > Hi Gabor, welcome to the Flink community!
> >>>>>>>>>>>> >>>> >
> >>>>>>>>>>>> >>>> > Thanks for sharing this proposal with the community
> >>>>>>>>>>>> Márton. In
> >>>>>>>>>>>> >>>> general, I
> >>>>>>>>>>>> >>>> > agree that authentication is missing and that this is
> >>>>>>>>>>>> required for
> >>>>>>>>>>>> >>>> using
> >>>>>>>>>>>> >>>> > Flink within an enterprise. The thing I am wondering is
> >>>>>>>>>>>> whether this
> >>>>>>>>>>>> >>>> > feature strictly needs to be implemented inside of
> Flink
> >>>>>>>>>>>> or whether a
> >>>>>>>>>>>> >>>> proxy
> >>>>>>>>>>>> >>>> > setup could do the job? Have you considered this
> option?
> >>>>>>>>>>>> If yes, then
> >>>>>>>>>>>> >>>> it
> >>>>>>>>>>>> >>>> > would be good to list it under the point of rejected
> >>>>>>>>>>>> alternatives.
> >>>>>>>>>>>> >>>> >
> >>>>>>>>>>>> >>>> > I do see the benefit of implementing this feature
> inside
> >>>>>>>>>>>> of Flink if
> >>>>>>>>>>>> >>>> many
> >>>>>>>>>>>> >>>> > users need it. If not, then it might be easier for the
> >>>>>>>>>>>> project to not
> >>>>>>>>>>>> >>>> > increase the surface area since it makes the overall
> >>>>>>>>>>>> maintenance
> >>>>>>>>>>>> >>>> harder.
> >>>>>>>>>>>> >>>> >
> >>>>>>>>>>>> >>>> > Cheers,
> >>>>>>>>>>>> >>>> > Till
> >>>>>>>>>>>> >>>> >
> >>>>>>>>>>>> >>>> > On Mon, May 31, 2021 at 4:57 PM Márton Balassi <
> >>>>>>>>>>>> mbala...@apache.org>
> >>>>>>>>>>>> >>>> wrote:
> >>>>>>>>>>>> >>>> >
> >>>>>>>>>>>> >>>> >> Hi team,
> >>>>>>>>>>>> >>>> >>
> >>>>>>>>>>>> >>>> >> Firstly I would like to introduce Gabor or G [1] for
> >>>>>>>>>>>> short to the
> >>>>>>>>>>>> >>>> >> community. He is a Spark committer who has recently
> >>>>>>>>>>>> transitioned to
> >>>>>>>>>>>> >>>> the
> >>>>>>>>>>>> >>>> >> Flink Engineering team at Cloudera and is looking
> >>>>>>>>>>>> forward to
> >>>>>>>>>>>> >>>> contributing
> >>>>>>>>>>>> >>>> >> to Apache Flink. Previously G primarily focused on
> >>>>>>>>>>>> Spark Streaming
> >>>>>>>>>>>> >>>> and
> >>>>>>>>>>>> >>>> >> security.
> >>>>>>>>>>>> >>>> >>
> >>>>>>>>>>>> >>>> >> Based on requests from our customers G has implemented
> >>>>>>>>>>>> Kerberos and
> >>>>>>>>>>>> >>>> HTTP
> >>>>>>>>>>>> >>>> >> Basic Authentication for the Flink Dashboard and
> >>>>>>>>>>>> >>>> >> HistoryServer, which previously lacked an authentication
> >>>>>>>>>>>> >>>> >> story.
> >>>>>>>>>>>> >>>> >>
> >>>>>>>>>>>> >>>> >> We are looking to contribute this functionality back
> to
> >>>>>>>>>>>> the
> >>>>>>>>>>>> >>>> community; we
> >>>>>>>>>>>> >>>> >> believe that given Flink's maturity there should be a
> >>>>>>>>>>>> common code
> >>>>>>>>>>>> >>>> solution
> >>>>>>>>>>>> >>>> >> for this general pattern.
> >>>>>>>>>>>> >>>> >>
> >>>>>>>>>>>> >>>> >> We are looking forward to your feedback on G's design.
> >>>>>>>>>>>> [2]
> >>>>>>>>>>>> >>>> >>
> >>>>>>>>>>>> >>>> >> [1] http://gaborsomogyi.com/
> >>>>>>>>>>>> >>>> >> [2]
> >>>>>>>>>>>> >>>> >>
> >>>>>>>>>>>> >>>> >>
> >>>>>>>>>>>> >>>>
> >>>>>>>>>>>>
> https://docs.google.com/document/d/1NMPeJ9H0G49TGy3AzTVVJVKmYC0okwOtqLTSPnGqzHw/edit
> >>>>>>>>>>>> >>>> >>
> >>>>>>>>>>>> >>>>
> >>>>>>>>>>>> >>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>
>
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
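To make the HTTP Basic side of the thread above concrete, here is a minimal, generic sketch of parsing and checking a `Basic` Authorization header (RFC 7617). This is illustrative only — it is not Flink's implementation, and the in-memory `CREDENTIALS` map is a hypothetical stand-in for a properly secured credential store:

```python
import base64

# Hypothetical in-memory credential store -- a real deployment would use
# a properly secured backend or delegate to a trusted proxy entirely.
CREDENTIALS = {"alice": "secret"}

def check_basic_auth(authorization_header):
    """Return the authenticated user name, or None if the header is invalid.

    HTTP Basic (RFC 7617) sends `Basic base64(user:password)` in the
    Authorization header of each request.
    """
    scheme, _, payload = authorization_header.partition(" ")
    if scheme != "Basic" or not payload:
        return None
    try:
        decoded = base64.b64decode(payload).decode("utf-8")
    except (ValueError, UnicodeDecodeError):
        return None
    # The user name may not contain ':'; the password may, so split once.
    user, sep, password = decoded.partition(":")
    if sep and CREDENTIALS.get(user) == password:
        return user
    return None
```

For example, `check_basic_auth("Basic " + base64.b64encode(b"alice:secret").decode())` yields `"alice"`, while a wrong password or a non-Basic scheme yields `None`.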
eventually replace DataSet. We would collect ideas over the next couple
> of
> > weeks without any visible progress on the implementation.
> >
> > On Fri, May 21, 2021 at 2:06 PM Konstantin Knauf
> > wrote:
> >
> > > Hi Timo,
> > >
> > > Thanks for j
> >
> > I'm very happy to announce that Xintong Song has joined the Flink PMC!
> >
> > Congratulations and welcome Xintong!
> >
> > Best,
> > Dawid
>
> >
> > Qingsheng Ren
> > Email: renqs...@gmail.com
> > On Jun 16, 2021, 5:21 PM +0800, Dawid Wysakowicz ,
> wrote:
> > > Hi all!
> > >
> > > I'm very happy to announce that Arvid Heise has joined the Flink PMC!
> > >
> > > Congratulations and welcome Arvid!
Konstantin Knauf created FLINK-23002:
Summary: C# SDK for Stateful Functions
Key: FLINK-23002
URL: https://issues.apache.org/jira/browse/FLINK-23002
Project: Flink
Issue Type: New
In what order is data from all matched tables synced? One table at a
> > time, or all at the same time?
> >
> >
> >
> > > AS select_statement: copy source table data into target
> >
> >
> >
> > A user could explicitly specify the data type for each column in CTAS;
> > what happens when running the following example? The demo is from the
> > MySQL documentation,
> https://dev.mysql.com/doc/refman/5.6/en/create-table-select.html
> > , and the result is a bit unexpected. I wonder what the behavior would
> >
> > be in Flink.
> >
> >
> > [image: image.png]
> >
> > Best,
> > JING ZHANG
> >
>
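The MySQL `CREATE TABLE ... SELECT` behavior linked above can be read as a name-based column merge: explicitly declared columns keep their declared definition, and SELECT columns without a declared counterpart are appended after them. A toy sketch of that reading (an assumption about the MySQL rule for illustration, not a statement of what Flink's CTAS should do):

```python
def merge_ctas_columns(declared, selected):
    """Merge explicitly declared CTAS columns with SELECT-derived columns.

    declared: list of (name, type) pairs from the CREATE TABLE part.
    selected: list of (name, type) pairs inferred from the SELECT part.

    Rule sketched here (one reading of the MySQL behavior): a declared
    definition overrides the inferred type for a matching name, and
    SELECT-only columns are appended after the declared ones.
    """
    declared_names = {name for name, _ in declared}
    merged = list(declared)
    for name, col_type in selected:
        if name not in declared_names:
            merged.append((name, col_type))
    return merged
```

Under this rule, `CREATE TABLE t (a TINYINT NOT NULL) AS SELECT a, b ...` would keep the declared `TINYINT` for `a` and append `b` — which is exactly the kind of merge/override behavior the question asks Flink to pin down.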
Konstantin Knauf created FLINK-22783:
Summary: Flink Jira Bot effectively does not apply all rules
anymore due to throttling
Key: FLINK-22783
URL: https://issues.apache.org/jira/browse/FLINK-22783
Konstantin Knauf created FLINK-22772:
Summary: Add TestContext to Python SDK
Key: FLINK-22772
URL: https://issues.apache.org/jira/browse/FLINK-22772
Project: Flink
Issue Type: Sub-task
Konstantin Knauf created FLINK-22771:
Summary: Add TestContext to Java SDK
Key: FLINK-22771
URL: https://issues.apache.org/jira/browse/FLINK-22771
Project: Flink
Issue Type: Sub-task
Konstantin Knauf created FLINK-22750:
Summary: Simplify Unit Testing of Remote Functions
Key: FLINK-22750
URL: https://issues.apache.org/jira/browse/FLINK-22750
Project: Flink
Issue Type
bot for subtasks or extend the period to 30 days?
>
> The core problem in the past was that we had issues laying around
> untouched for years. Luckily, this is solved with the bot now. But going
> from years to 7 days spams the mail box quite a bit.
>
> Regards,
> Tim
until Monday I
> can also do it myself. However I'd appreciate if someone else could take
> care of tracking the progress of the issues we want to include in the
> release.
> >
> > Best,
> > Dawid
> >
> > On 20/05/2021 09:59, Konstantin Knauf wrote:
> > I like this idea. +1 for your proposal Konstantin.
> >
> > Cheers,
> > Till
> >
> > On Wed, May 19, 2021 at 1:30 PM Konstantin Knauf <
> konstan...@ververica.com
> > >
> > wrote:
> >
> > > Hi everyone,
> > >
> >
> > >
> > > > Regards,
> > > > Timo
> > > >
> > > >
> > > > On 17.05.21 10:11, Robert Metzger wrote:
> > > > > Thanks a lot for starting the discussion about the release.
> > > > > I'd like to include
> >
n that "roll over" many tickets to the next release if
they have not made it into the previous release although there is no concrete
plan to fix them or they have even become obsolete by then. Excluding those
from the bot would be counterproductive.
What do you think?
Cheers,
Konstantin
On Fri, Apr
notification per ticket which
includes the label/comment and status update, instead of separate ones.
Cheers,
Konstantin
,
Konstantin
proposing to start releasing 1.12.4 on next Monday.
> >
> > I'd volunteer as a release manager again.
> >
> > Best,
> >
> > Arvid
> >
> > [1] https://issues.apache.org/jira/browse/FLINK-22555
> >
>
>
Konstantin Knauf created FLINK-22570:
Summary: Reduce #notifications of Flink Jira Bot by combining API
requests
Key: FLINK-22570
URL: https://issues.apache.org/jira/browse/FLINK-22570
Project
Konstantin Knauf created FLINK-22569:
Summary: Limit #tickets touched by Flink Jira Bot per run
Key: FLINK-22569
URL: https://issues.apache.org/jira/browse/FLINK-22569
Project: Flink
Konstantin Knauf created FLINK-22430:
Summary: Increase "stale-assigned.stale-days" to 14
Key: FLINK-22430
URL: https://issues.apache.org/jira/browse/FLINK-22430
Project: Flink
Konstantin Knauf created FLINK-22429:
Summary: Exclude Sub-Tasks in all bot "stale-unassigned" rule of
Jira Bot
Key: FLINK-22429
URL: https://issues.apache.org/jira/browse/FLINK-22429
Nico
have provided feedback that this is too aggressive).
* exclude Sub-Tasks from all rules except the "stale-assigned" rule (I
think, this was just an oversight in the original discussion.)
Keep it coming.
Cheers,
Konstantin
quot;Deployment
/ Kubernetes", "Deployment / Mesos", "Deployment / YARN", flink-docker,
"Release System", "Runtime / Coordination", "Runtime / Metrics", "Runtime /
Queryable State", "Runtime / REST", Travis) AND resolution = Unresolved AND
labels in (stale-assigned) AND labels in (pull-request-available)
Cheers,
Konstantin
[1] https://github.com/apache/flink-jira-bot/blob/master/config.yaml
Konstantin Knauf created FLINK-22394:
Summary: Auto Close issues with resolution "Auto Closed" instead
of "Fixed"
Key: FLINK-22394
URL: https://issues.apache.org/jira/browse/FLINK-22394
Konstantin Knauf created FLINK-22391:
Summary: Support State Introspection
Key: FLINK-22391
URL: https://issues.apache.org/jira/browse/FLINK-22391
Project: Flink
Issue Type: New Feature
Konstantin Knauf created FLINK-22390:
Summary: Support for OpenTracing
Key: FLINK-22390
URL: https://issues.apache.org/jira/browse/FLINK-22390
Project: Flink
Issue Type: New Feature
Konstantin Knauf created FLINK-22389:
Summary: Expose Message Batches to SDKs
Key: FLINK-22389
URL: https://issues.apache.org/jira/browse/FLINK-22389
Project: Flink
Issue Type: New
> {stale.minor.warning_days} with a comment that encourages users to
> watch, comment and simply reopen with a higher priority if the problem
> persists.
>
> Why is the ticket for said rule still open?
> https://issues.apache.org/jira/browse/FLINK-22032
>
> On 4/14/2021 12:06
of the next two weeks. So, don't be surprised
when you see more and more Flink Jira Bot activity in our Jira.
Please let me know if you have any questions.
Cheers,
Konstantin
[1] https://issues.apache.org/jira/browse/FLINK
ailing-list-archive.1008284.n3.nabble.com/SURVEY-Remove-Mesos-support-td45974.html
> [2] https://flink.apache.org/roadmap.html#feature-radar
>
the same release cycle could be developed in sync with each
> other.
>
> Let me know what you think.
>
> Regards,
>
> Chesnay
>
>
Konstantin Knauf created FLINK-22231:
Summary: Add Rule to deprioritize Critical/Blocker Non-Bugs
Key: FLINK-22231
URL: https://issues.apache.org/jira/browse/FLINK-22231
Project: Flink
I’m targeting this vote
> > to
> > > last until April. 2nd, 12pm CET.
> > > It is adopted by majority approval, with at least 3 PMC affirmative
> > votes.
> > >
> > > Thanks,
> > > Gordon
> > >
> > > [1]
> > >
>
Konstantin Knauf created FLINK-22066:
Summary: Improve Error Message on unknown Ingress Type
Key: FLINK-22066
URL: https://issues.apache.org/jira/browse/FLINK-22066
Project: Flink
Issue
Konstantin Knauf created FLINK-22036:
Summary: Document Jira Bot in Flink Confluence
Key: FLINK-22036
URL: https://issues.apache.org/jira/browse/FLINK-22036
Project: Flink
Issue Type
Konstantin Knauf created FLINK-22034:
Summary: Add Rule 1
Key: FLINK-22034
URL: https://issues.apache.org/jira/browse/FLINK-22034
Project: Flink
Issue Type: Sub-task
Reporter
Konstantin Knauf created FLINK-22035:
Summary: Run Jira Bot periodically with Github Actions
Key: FLINK-22035
URL: https://issues.apache.org/jira/browse/FLINK-22035
Project: Flink
Issue
Konstantin Knauf created FLINK-22033:
Summary: Add Rule 2
Key: FLINK-22033
URL: https://issues.apache.org/jira/browse/FLINK-22033
Project: Flink
Issue Type: Sub-task
Reporter
Konstantin Knauf created FLINK-22032:
Summary: Implement Rule 3 First (Simplest Rule)
Key: FLINK-22032
URL: https://issues.apache.org/jira/browse/FLINK-22032
Project: Flink
Issue Type
Konstantin Knauf created FLINK-22030:
Summary: Add Technical Debt Issue Type
Key: FLINK-22030
URL: https://issues.apache.org/jira/browse/FLINK-22030
Project: Flink
Issue Type: Sub-task
Konstantin Knauf created FLINK-22029:
Summary: Remove "Task", "Wish" and "Test" Issue Types
Key: FLINK-22029
URL: https://issues.apache.org/jira/browse/FLINK-22029
Konstantin Knauf created FLINK-22028:
Summary: Remove "Trivial" Ticket Priority
Key: FLINK-22028
URL: https://issues.apache.org/jira/browse/FLINK-22028
Project: Flink
Issue
Konstantin Knauf created FLINK-22027:
Summary: Create separate repository for Jira Bot
Key: FLINK-22027
URL: https://issues.apache.org/jira/browse/FLINK-22027
Project: Flink
Issue Type
Konstantin Knauf created FLINK-22026:
Summary: Improved Jira Process & Bot
Key: FLINK-22026
URL: https://issues.apache.org/jira/browse/FLINK-22026
Project: Flink
Issue Type: Improvement
lease
> Manager that this ticket must be completed. How would that work in your
> proposal?
>
> On 3/26/2021 9:18 AM, Konstantin Knauf wrote:
>
> Hi Chesnay,
>
> a blocker is currently defined in the Flink Confluence as a "needs to be
> resolved before a release (matched
n stale for
> months.
>
> On 3/26/2021 8:46 AM, Konstantin Knauf wrote:
> > Hi Arvid,
> >
> > I agree that this should never happen for blockers. My thinking was that
> if
> > an unassigned blocker is deprioritized after 1 day it also forces us to
> >
resolved with urgency, I also cannot imagine a blocker going
> completely stale, so we probably talk about something that never happens in
> reality. For other tickets, it makes sense.
>
> On Tue, Mar 23, 2021 at 8:09 AM Konstantin Knauf
> wrote:
>
> > Hi everyone,
> >
>
and would like to kick off the release candidates early
> next week.
>
> Please let us know if you have any concerns.
>
> Thanks,
> Gordon
>
> [1] https://github.com/apache/flink-statefun-playground
>
s handle the Mesos piece as well when they touch
> the
> >> resource managers?)
> >> >
> >> >
> >> >
> >> > Thanks,
> >> >
> >> >
> >> >
> >> > -- Piyush
> >> >
> >> >
> >> >
> >> >
> >> >
> >> > From: Till Rohrmann >> trohrm...@apache.org>>
> >> > Date: Friday, October 23, 2020 at 8:19 AM
> >> > To: Xintong Song >> tonysong...@gmail.com>>
> >> > Cc: dev mailto:dev@flink.apache.org>>,
> user <
> >> u...@flink.apache.org<mailto:u...@flink.apache.org>>
> >> > Subject: Re: [SURVEY] Remove Mesos support
> >> >
> >> >
> >> >
> >> > Thanks for starting this survey Robert! I second Konstantin and
> >> Xintong in the sense that our Mesos users' opinions should matter most
> >> here. If our community is no longer using the Mesos integration, then I
> >> would be +1 for removing it in order to decrease the maintenance burden.
> >> >
> >> >
> >> >
> >> > Cheers,
> >> >
> >> > Till
> >> >
> >> >
> >> >
> >> > On Fri, Oct 23, 2020 at 2:03 PM Xintong Song <
> tonysong...@gmail.com
> >> <mailto:tonysong...@gmail.com>> wrote:
> >> >
> >> > +1 for adding a warning in 1.12 about planning to remove Mesos
> >> support.
> >> >
> >> >
> >> >
> >> > With my developer hat on, removing the Mesos support would
> >> definitely reduce the maintaining overhead for the deployment and
> resource
> >> management related components. On the other hand, the Flink on Mesos
> users'
> >> voices definitely matter a lot for this community. Either way, it would
> be
> >> good to draw users' attention to this discussion early.
> >> >
> >> >
> >> >
> >> > Thank you~
> >> >
> >> > Xintong Song
> >> >
> >> >
> >> >
> >> >
> >> >
> >> > On Fri, Oct 23, 2020 at 7:53 PM Konstantin Knauf <
> kna...@apache.org
> >> <mailto:kna...@apache.org>> wrote:
> >> >
> >> > Hi Robert,
> >> >
> >> > +1 to the plan you outlined. If we were to drop support in Flink
> >> 1.13+, we
> >> > would still support it in Flink 1.12- with bug fixes for some time
> >> so that
> >> > users have time to move on.
> >> >
> >> > It would certainly be very interesting to hear from current Flink
> >> on Mesos
> >> > users, on how they see the evolution of this part of the
> ecosystem.
> >> >
> >> > Best,
> >> >
> >> > Konstantin
> >>
> >
>
we collected in [1]
> sound
> > good. I'm looking forward to trying it out.
> > +1 from my side.
> >
> > Best,
> > Matthias
> >
> > [1]
> >
> https://lists.apache.org/thread.html/re7affbb1357ce4986a7770b0052c39c9a26ebd7cd0df3f15ed320781%40%3Cdev.fl
Konstantin Knauf created FLINK-21948:
Summary: Show Watermarks in Human Friendly Format in the Web User
Interface
Key: FLINK-21948
URL: https://issues.apache.org/jira/browse/FLINK-21948
Project
/re7affbb1357ce4986a7770b0052c39c9a26ebd7cd0df3f15ed320781%40%3Cdev.flink.apache.org%3E
[2]
https://docs.google.com/document/d/19VmykDSn4BHgsCNTXtN89R7xea8e3cUIl-uivW8L6W8/edit#
Hi everyone,
The discussion has stalled a bit on this thread. I would proceed to a vote
on the currently documented proposal tomorrow if there are no further
concerns or opinions.
Best,
Konstantin
On Fri, Mar 12, 2021 at 5:24 PM Konstantin Knauf wrote:
> Hi Leonard,
>
> Thank you
Konstantin Knauf created FLINK-21855:
Summary: Document Metrics Limitations
Key: FLINK-21855
URL: https://issues.apache.org/jira/browse/FLINK-21855
Project: Flink
Issue Type: Sub-task
Konstantin Knauf created FLINK-21844:
Summary: Do not auto-configure maxParallelism when setting
"scheduler-mode: reactive"
Key: FLINK-21844
URL: https://issues.apache.org/jira/browse/F
`auto-deprioritized-blocker` in rule 1 details
> should
> > be `auto-deprioritized-critical/major`.
> >
> > Thank you~
> >
> > Xintong Song
> >
> >
> >
> > On Fri, Mar 5, 2021 at 7:33 PM Konstantin Knauf
> wrote:
> >
> >> Hi everyone,
> >
It can be a good step towards managing
> > technical debt in some other way, like wiki.
> >
> > Thanks!
> >
> > Regards,
> > Roman
> >
> >
> > On Tue, Mar 2, 2021 at 9:32 AM Dawid Wysakowicz
> > wrote:
> >
> > > I'd be fine with
on it in the near future?
> Another approach would be some wiki space.
>
> As for the trivial priority, I would remove it and (use labels where
> appropriate) as you suggested.
>
> Regards,
> Roman
>
>
> On Mon, Mar 1, 2021 at 11:53 AM Konstantin Knauf >
> wrote:
Therefore it
> will automatically prioritise the tasks according to failure frequencies.
>
> Best,
>
> Dawid
>
> On 01/03/2021 09:38, Konstantin Knauf wrote:
> > Hi Xintong,
> >
> > yes, such labels would make a lot of sense. I added a sentence to the
>
this would lessen the exposure to the various Flink areas
> for lists maintainers.
>
> What do you think?
>
> Regards,
> Roman
>
--
Konstantin Knauf | Head of Product
+49 160 91394525
Follow us @VervericaData Ververica <https://www.ververica.com/>
--
Join Flink Fo
Thanks for starting this discussion Konstantin. I like your proposal and
> > also the idea of automating the tedious parts of it via a bot.
> >
> > Cheers,
> > Till
> >
> > On Fri, Feb 26, 2021 at 4:17 PM Konstantin Knauf
> > wrote:
> >
> > > Dear Flink
hread.html/rd34fb695d371c2bf0cbd1696ce190bac35dd78f29edd8c60d0c7ee71%40%3Cdev.flink.apache.org%3E
[2]
https://cwiki.apache.org/confluence/display/FLINK/FLINK+Jira+field+definitions
rg/confluence/display/FLINK/FLIP-158%3A+Generalized+incremental+checkpoints
>
om: http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/
>
kewness-can-reduce-checkpoint-failures-and-task-manager-crashes
Cheers,
Konstantin
Zhu-Zhu-tp45418p45474.html
[21]
https://flink.apache.org/2020/10/15/from-aligned-to-unaligned-checkpoints-part-1.html
[22]
https://flink.apache.org/news/2020/10/13/stateful-serverless-internals.html
Cheers,
Konstantin
; --
>
> long is the way and hard that out of Hell leads up to light
>
Hi Robert,
+1 to the plan you outlined. If we were to drop support in Flink 1.13+, we
would still support it in Flink 1.12- with bug fixes for some time so that
users have time to move on.
It would certainly be very interesting to hear from current Flink on Mesos
users, on how they see the
sert-kafka+Connector
> > [2]
> >
> >
> https://lists.apache.org/thread.html/r83e3153377594276b2066e49e399ec05d127b58bb4ce0fde33309da2%40%3Cdev.flink.apache.org%3E
> >
>
s a
> > > dimension table in temporal join.
> > >
> > > >Introduce a new connector vs introduce a new property
> > > The main reason behind is that the KTable connector almost has no
> common
> > > options with the Kafka connector. The option
> the KTable notion in Kafka Stream.
> >> > >
> >> > > FLIP-149:
> >> > >
> >> > >
> >>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-149%3A+Introduce+the+KTable+Connector
> >> > >
> >> > > Currently many users have expressed their needs for the upsert Kafka
> >> by
> >> > > mailing lists and issues. The KTable connector has several benefits for
> >> users:
> >> > >
> >> > > 1. Users are able to interpret a compacted Kafka Topic as an upsert
> >> stream
> >> > > in Apache Flink. And also be able to write a changelog stream to
> Kafka
> >> > > (into a compacted topic).
> >> > > 2. As a part of the real time pipeline, store join or aggregate
> >> result (may
> >> > > contain updates) into a Kafka topic for further calculation;
> >> > > 3. The semantic of the KTable connector is just the same as KTable
> in
> >> Kafka
> >> > > Stream. So it's very handy for Kafka Stream and KSQL users. We have
> >> seen
> >> > > several questions in the mailing list asking how to model a KTable
> >> and how
> >> > > to join a KTable in Flink SQL.
> >> > >
> >> > > We hope it can expand the usage of the Flink with Kafka.
> >> > >
> >> > > I'm looking forward to your feedback.
> >> > >
> >> > > Best,
> >> > > Shengkai
> >> > >
> >> >
> >> >
> >> > --
> >> > Best, Jingsong Lee
> >>
> >
>
Konstantin Knauf created FLINK-19695:
Summary: Writing Table with RowTime Column of type TIMESTAMP(3) to
Kafka fails with ClassCastException
Key: FLINK-19695
URL: https://issues.apache.org/jira/browse/FLINK
t;>> favor of the relatively recently introduced StreamingFileSink.
> >>>
> >>> For the sake of a clean and more manageable codebase, I propose to
> >>> remove this module for release-1.12, but of course we should see first
> >>> if there are any u
Konstantin Knauf created FLINK-19495:
Summary: Add documentation for avro-confluent format
Key: FLINK-19495
URL: https://issues.apache.org/jira/browse/FLINK-19495
Project: Flink
Issue
Konstantin Knauf created FLINK-19418:
Summary: Inline PRIMARY KEY constraint should be invalid
Key: FLINK-19418
URL: https://issues.apache.org/jira/browse/FLINK-19418
Project: Flink
> > > Hello,
> > >
> > > I'd like to kickoff the next release of flink-shaded, which will
> contain
> > > a bump to netty (4.1.49) and snakeyaml (1.27).
> > >
> > > Any concerns? Any other dependency people want upgrade for the 1.12?
> > >
> > >
> >
>
witter.com/FlinkForward/status/1306219099475902464
Cheers,
Konstantin
t away.
> >>>>>
> >>>>> Best,
> >>>>> Stephan
> >>>>>
> >>>>>
> >>>>> On Tue, Sep 8, 2020 at 3:25 PM Becket Qin
> >>>>> wrote:
> >>>>>
> >>
]
https://www.ververica.com/blog/a-deep-dive-on-change-data-capture-with-flink-sql-during-flink-forward
Cheers,
Konstantin
display/FLINK/FLIP-142%3A+Disentangle+StateBackends+from+Checkpointing
>
until 2015.
> > > Besides his work on the code, he has been driving initiatives on dev@
> > list,
> > > supporting users and giving talks at conferences.
> > >
> > > Please join me in congratulating Niels for becoming a Flink committer!
> > >
com/DISCUSS-FLIP-107-Reading-table-columns-from-different-parts-of-source-records-td38277.html
> >>
> >> [2]
> >>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-107%3A+Handling+of+metadata+in+SQL+connectors
> >>
> >
>
>
fluence/display/FLINK/FLIP-142%3A+Disentangle+StateBackends+from+Checkpointing
>
>
> I look forward to a healthy discussion.
>
>
> Seth
>
Hi Becket,
Thank you for picking up this FLIP. I have a few questions:
* two thoughts on naming:
* idleTime: In the meantime, a similar metric "idleTimeMsPerSecond" has
been introduced in https://issues.apache.org/jira/browse/FLINK-16864. They
have a similar name, but different definitions of
Konstantin
Konstantin Knauf created FLINK-19160:
Summary: When backpressured
AsyncWaitOperator/ContinuousFileReaderOperator are not idle
Key: FLINK-19160
URL: https://issues.apache.org/jira/browse/FLINK-19160
Konstantin Knauf created FLINK-19149:
Summary: Compacted Kafka Topic can be interpreted as Changelog
Stream
Key: FLINK-19149
URL: https://issues.apache.org/jira/browse/FLINK-19149
Project: Flink
> >>>>>> should be
> >>>>>> value.fields-include. Which I think you also suggested in the
> >>>>>> comment,
> >>>>>> right?
> >>>>>>>> As for the cast vs declaring output type of computed column. I
> >>>>>>>> think
> >>>>>> it's better not to use CAST, but declare a type of an expression and
> >>>>> later
> >>>>>> on infer the output type of SYSTEM_METADATA. The reason is I think
> >>>>>> this
> >>>>> way
> >>>>>> it will be easier to implement e.g. filter push downs when working
> >>>>>> with
> >>>>> the
> >>>>>> native types of the source, e.g. in case of Kafka's offset, i
> >>>>>> think it's
> >>>>>> better to pushdown long rather than string. This could let us push
> >>>>>> expression like e.g. offset > 12345 & offset < 59382. Otherwise we
> >>>>>> would
> >>>>>> have to push down cast(offset, long) > 12345 && cast(offset, long) <
> >>>>> 59382.
> >>>>>> Moreover I think we need to introduce the type for computed columns
> >>>>> anyway
> >>>>>> to support functions that infer output type based on expected return
> >>>>> type.
> >>>>>>>> As for the computed column push down. Yes, SYSTEM_METADATA would
> >>>>>>>> have
> >>>>>> to be pushed down to the source. If it is not possible the planner
> >>>>> should
> >>>>>> fail. As far as I know computed columns push down will be part of
> >>>>>> source
> >>>>>> rework, won't it? ;)
> >>>>>>>> As for the persisted computed column. I think it is completely
> >>>>>> orthogonal. In my current proposal you can also partition by a
> >>>>>> computed
> >>>>>> column. The difference between using a udf in partitioned by vs
> >>>>> partitioned
> >>>>>> by a computed column is that when you partition by a computed column
> >>>>> this
> >>>>>> column must be also computed when reading the table. If you use a
> >>>>>> udf in
> >>>>>> the partitioned by, the expression is computed only when inserting
> >>>>>> into
> >>>>> the
> >>>>>> table.
> >>>>>>>> Hope this answers some of your questions. Looking forward to further
> >>>>>> suggestions.
> >>>>>>>> Best,
> >>>>>>>> Dawid
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> On 02/03/2020 05:18, Jark Wu wrote:
> >>>>>>>>> Hi,
> >>>>>>>>>
> >>>>>>>>> Thanks Dawid for starting such a great discussion. Reading metadata
> >>>>>>>>> and key-part information from sources is an important feature for
> >>>>>>>>> streaming users.
> >>>>>>>>>
> >>>>>>>>> In general, I agree with the proposal of the FLIP.
> >>>>>>>>> I will leave my thoughts and comments here:
> >>>>>>>>>
> >>>>>>>>> 1) +1 to use connector properties instead of introducing HEADER
> >>>>>> keyword as
> >>>>>>>>> the reason you mentioned in the FLIP.
> >>>>>>>>> 2) We already introduced PARTITIONED BY in FLIP-63. Maybe we should
> >>>>>>>>> add a section to explain the relationship between them.
> >>>>>>>>> Do their concepts conflict? Could INSERT PARTITION be used
> >>>>>>>>> on the
> >>>>>>>>> PARTITIONED table in this FLIP?
> >>>>>>>>> 3) Currently, properties are hierarchical in Flink SQL. Shall we make
> >>>>>>>>> the newly introduced properties more hierarchical?
> >>>>>>>>> For example, "timestamp" => "connector.timestamp"?
> >>>>>>>>> (actually, I
> >>>>>> prefer
> >>>>>>>>> "kafka.timestamp" which is another improvement for properties
> >>>>>> FLINK-12557)
> >>>>>>>>> A single "timestamp" in properties may mislead users that
> the
> >>>>> field
> >>>>>> is
> >>>>>>>>> a rowtime attribute.
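The naming concern can be illustrated with a WITH clause. The keys below are illustrative only; the final option naming was the subject of FLINK-12557 and was not settled in this thread:

```sql
-- A bare 'timestamp' key is ambiguous: is it a rowtime attribute, a
-- metadata field, or a connector option?
-- A prefixed, hierarchical key makes the scope explicit (hypothetical key):
CREATE TABLE t (
  id STRING
) WITH (
  'connector' = 'kafka',
  'kafka.timestamp' = '...'   -- clearly scoped to the Kafka connector
);
```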
> >>>>>>>>>
> >>>>>>>>> I also left some minor comments in the FLIP.
> >>>>>>>>>
> >>>>>>>>> Thanks,
> >>>>>>>>> Jark
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> On Sun, 1 Mar 2020 at 22:30, Dawid Wysakowicz <
> >>>>> dwysakow...@apache.org>
> >>>>>>>>> wrote:
> >>>>>>>>>
> >>>>>>>>>> Hi,
> >>>>>>>>>>
> >>>>>>>>>> I would like to propose an improvement that would enable reading
> >>>>> table
> >>>>>>>>>> columns from different parts of source records. Besides the main
> >>>>>>>>>> payload, the majority (if not all) of the sources expose additional
> >>>>>>>>>> information. It can be simply read-only metadata such as offset or
> >>>>>>>>>> ingestion time, or read/write parts of the record that contain data
> >>>>>>>>>> but additionally serve different purposes (partitioning, compaction,
> >>>>>>>>>> etc.), e.g. key or timestamp in Kafka.
> >>>>>>>>>>
> >>>>>>>>>> We should make it possible to read and write data from all of those
> >>>>>>>>>> locations. In this proposal I discuss reading partitioning data; for
> >>>>>>>>>> completeness it also discusses partitioning when writing data out.
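A sketch of what reading from different parts of a Kafka record could look like. The syntax is hypothetical (the FLIP had not fixed a final grammar here), and the table, topic, and column names are invented for illustration:

```sql
-- Hypothetical DDL mapping columns to different parts of a Kafka record.
CREATE TABLE kafka_orders (
  order_id STRING,                                       -- from the message key
  amount DECIMAL(10, 2),                                 -- from the value payload
  kafka_ts TIMESTAMP(3) AS SYSTEM_METADATA("timestamp"), -- read-only metadata
  kafka_offset BIGINT AS SYSTEM_METADATA("offset")       -- read-only metadata
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders'
);
```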
> >>>>>>>>>>
> >>>>>>>>>> I am looking forward to your comments.
> >>>>>>>>>>
> >>>>>>>>>> You can access the FLIP here:
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>
> >>>>>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-107%3A+Reading+table+columns+from+different+parts+of+source+records?src=contextnavpagetreemode
> >>>>>
> >>>>>>>>>>
> >>>>>>>>>> Best,
> >>>>>>>>>>
> >>>>>>>>>> Dawid
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>>
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
ersion previous
> > > jdk8u72(-b01)
> > > - FLINK-17075 Add task status reconciliation between TM and JM
> > >
> > > Furthermore, I think the following blocker issue should be merged
> before
> > > 1.11.2 release
> > >
> > > - FLI
t-archive.1008284.n3.nabble.com/ANNOUNCE-New-PMC-member-Dian-Fu-tp44170p44240.html
Cheers,
Konstantin
> [1]
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-63%3A+Rework+table+partition+support
> [2]
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-107%3A+Reading+table+columns+from+different+parts+of+source+records
> [3] http://iceberg.apache.org/partitioning/
> [4] https://oracle-base.com/articles/8i/partitioned-tables-and-indexes
>
> Best,
> Jingsong
>
uch as
> >> FoldingDescriptor, FoldFunction, ...)
> >> - DataStream#split
> >>
> >> This was discussed in
> >>
> https://lists.apache.org/thread.html/rf37cd0e00e9adb917b7b75275af2370ec2f3970d17a4abd0db7ead31%40%3Cdev.flink.apache.org%3E
> >>
munity!
> >
> > Marta
> >
> > [1] https://developers.google.com/season-of-docs/docs/participants
> > [2] https://github.com/KKcorps
> > [3] https://www.linkedin.com/in/haseebasif/
> > [4] https://2020.beamsummit.org/sessions/nexmark-beam-flinkndb/
> >
>
orward.org/global-2020/conference-program
[24]
https://www.eventbrite.com/e/flink-forward-global-virtual-2020-tickets-113775477516#tickets
Cheers,
Konstantin
> > enableCheckpointing()
>>>>
>>>> > isForceCheckpointing()
>>>>
>>>> >
>>>>
>>>> > readFile(FileInputFormat inputFormat,String
>>>>
>>>> > filePath,FileProcessingMode watchType,long interval, FilePathFilter
>>>>
>>>> > filter)
>>>>
>>>> > readFileStream(...)
>>>>
>>>> >
>>>>
>>>> > socketTextStream(String hostname, int port, char delimiter, long
>>>> maxRetry)
>>>>
>>>> > socketTextStream(String hostname, int port, char delimiter)
>>>>
>>>> >
>>>>
>>>> > There are more, like the (get)/setNumberOfExecutionRetries() that were
>>>>
>>>> > deprecated long ago, but I have not investigated to see if they are
>>>>
>>>> > actually easy to remove.
>>>>
>>>> >
>>>>
>>>> > Cheers,
>>>>
>>>> > Kostas
>>>>
>>>> >
>>>>
>>>> > On Mon, Aug 17, 2020 at 10:53 AM Dawid Wysakowicz
>>>>
>>>> > wrote:
>>>>
>>>> >
>>>>
>>>> > Hi devs and users,
>>>>
>>>> >
>>>>
>>>> > I wanted to ask what you think about removing some of the
>>>> deprecated APIs around the DataStream API.
>>>>
>>>> >
>>>>
>>>> > The APIs I have in mind are:
>>>>
>>>> >
>>>>
>>>> > RuntimeContext#getAllAccumulators (deprecated in 0.10)
>>>>
>>>> > DataStream#fold and all related classes and methods such as
>>>> FoldFunction, FoldingState, FoldingStateDescriptor ... (deprecated in
>>>> 1.3/1.4)
>>>>
>>>> > StreamExecutionEnvironment#setStateBackend(AbstractStateBackend)
>>>> (deprecated in 1.5)
>>>>
>>>> > DataStream#split (deprecated in 1.8)
>>>>
>>>> > Methods in (Connected)DataStream that specify keys as either indices
>>>> or field names such as DataStream#keyBy, DataStream#partitionCustom,
>>>> ConnectedStream#keyBy, (deprecated in 1.11)
>>>>
>>>> >
>>>>
>>>> > I think the first three should be straightforward. They are long
>>>> deprecated. The getAccumulators method is not used very often, in my
>>>> opinion. The same applies to DataStream#fold, which additionally is not
>>>> very performant. Lastly, setStateBackend has an alternative with a class
>>>> from the AbstractStateBackend hierarchy, so it will still be code
>>>> compatible. Moreover, if we remove
>>>> #setStateBackend(AbstractStateBackend) we will get rid of the warnings
>>>> users currently see when setting a state backend, as the correct method
>>>> cannot be used without an explicit cast.
>>>>
>>>> >
>>>>
>>>> > As for DataStream#split, I know there were some objections against
>>>> removing the #split method in the past. I still believe that side
>>>> outputs (OutputTag) can already replace the split method.
>>>>
>>>> >
>>>>
>>>> > The only problem with the last set of methods I propose to remove is
>>>> that they were deprecated only in the last release, and only partially.
>>>> Moreover, some of the methods were not deprecated in ConnectedStreams.
>>>> Nevertheless, I'd still be inclined to remove the methods in this
>>>> release.
>>>>
>>>> >
>>>>
>>>> > Let me know what you think about it.
>>>>
>>>> >
>>>>
>>>> > Best,
>>>>
>>>> >
>>>>
>>>> > Dawid
>>>
>>>
Yes, I know that we had multiple discussions like this in the past but I'm
> trying to gauge the current sentiment.
>
> I'm cross-posting to the user-ml since this is important for both users
> and developers.
>
> Best,
> Aljoscha
>
> [1] https://issues.apache.org/jira/browse/FLINK-17260
>
>
>
Flink Forward <https://flink-forward.org/> - The Apache Flink
> Conference
>
> Stream Processing | Event Driven | Real Time
>
> --
>
> Ververica GmbH | Invalidenstrasse 115, 10115 Berlin, Germany
>
> --
> Ververica GmbH
> Registered at Amtsgericht Charlottenburg: HRB 158244 B
> Managing Directors: Timothy Alexander Steinert, Yip Park Tung Jason, Ji
> (Toni) Cheng
>
>
>
> --
> Best Regards
>
> Jeff Zhang
>
>