Hi,

I think the Kafka 1.0/1.1/2.0 connectors can be part of the Flink 1.7 release.
The PR for the current Kafka 1.0 connector has been submitted.
I am refactoring the existing Kafka connector test code to reduce the
amount of duplicated code.

Thanks, vino.

Coo1 min <gjying1...@gmail.com> wrote on Thu, Aug 23, 2018 at 5:59 PM:

> Hi,
>
> I am concerned about the progress of CEP library development. Can the
> following two main features be kicked off and included in Flink 1.7?
>
> 1) integration of CEP & SQL
> 2) dynamic change of CEP pattern without the downtime
>
> And I am willing to contribute to this, thanks.
>
> Aljoscha Krettek <aljos...@apache.org> wrote on Thu, Aug 23, 2018 at 4:12 PM:
>
> > Hi Everyone,
> >
> > After the recent Flink 1.6 release, the people working on Flink at data
> > Artisans came together to talk about what we want to work on for Flink 1.7.
> > The following is a list of high-level directions that we will be working on
> > for the next couple of months. This doesn't mean that other things are not
> > important (other topics may even be more important), so please chime in.
> >
> > That being said, here's the high-level list:
> >  - make the REST API versioned
> >  - provide docker-compose based quickstarts for Flink SQL
> >  - support writing to S3 in the new streaming file sink
> >  - add a new type of join that allows "joining streams with tables"
> >  - Scala 2.12 support
> >  - improvements to resource scheduling, local recovery
> >  - improved support for running Flink in containers, Flink dynamically
> > reacting to changes in the container deployment
> >  - automatic rescaling policies
> >  - initial support for state migration, i.e. changing the
> > schema/TypeSerializer of Flink State
> >
> > This is also an invitation for others to post what they would like to
> work
> > on and also to point out glaring omissions.
> >
> > Best,
> > Aljoscha
>