Re: Flink 1.7 Development Priorities
I think the Kafka 1.0/1.1/2.0 connector can be part of the Flink 1.7 release.
The PR for the current Kafka 1.0 connector has been submitted.
I am refactoring the existing Kafka connector test code to reduce the
amount of duplicate code.
Coo1 min <gjying1314@xxxxxxxxx> wrote on Thursday, August 23, 2018 at 5:59 PM:
> I am concerned about the progress of CEP library development. Can the
> following two main features be kicked off and included in Flink 1.7?
> 1) integration of CEP & SQL
> 2) dynamic changes to CEP patterns without downtime
> I am willing to contribute to this, thanks.
> Aljoscha Krettek <aljoscha@xxxxxxxxxx> wrote on Thursday, August 23, 2018 at 4:12 PM:
> > Hi Everyone,
> > After the recent Flink 1.6 release, the people working on Flink at data
> > Artisans came together to talk about what we want to work on for Flink 1.7.
> > The following is a list of high-level directions that we will be working
> > on for the next couple of months. This doesn't mean that other things are
> > not important or maybe more important, so please chime in.
> > That being said, here's the high-level list:
> > - make the REST API versioned
> > - provide docker-compose based quickstarts for Flink SQL
> > - support writing to S3 in the new streaming file sink
> > - add a new type of join that allows "joining streams with tables"
> > - Scala 2.12 support
> > - improvements to resource scheduling, local recovery
> > - improved support for running Flink in containers, with Flink dynamically
> > reacting to changes in the container deployment
> > - automatic rescaling policies
> > - initial support for state migration, i.e. changing the
> > schema/TypeSerializer of Flink State
> > This is also an invitation for others to post what they would like to work
> > on and also to point out glaring omissions.
> > Best,
> > Aljoscha