Re: [DISCUSS] Flink SQL DDL Design
+1. Thanks for putting the proposal together Shuyi.
DDL has been brought up a couple of times previously [1,2]. Utilizing
DDL will definitely be a great extension to the current Flink SQL, and a way
to systematically support several of the previously requested features.
It will also be beneficial to see the document closely aligned
with the previous discussion on the unified SQL connector API.
I also left a few comments on the doc. Looking forward to the alignment
with the other related efforts and to contributing to them!
On Fri, Nov 2, 2018 at 10:22 AM Bowen Li <bowenli86@xxxxxxxxx> wrote:
> Thanks Shuyi!
> I left some comments there. I think the design of SQL DDL and Flink-Hive
> integration/External catalog enhancements will work closely with each
> other. Hope we are well aligned on the directions of the two designs, and I
> look forward to working with you guys on both!
> On Thu, Nov 1, 2018 at 10:57 PM Shuyi Chen <suez1224@xxxxxxxxx> wrote:
> > Hi everyone,
> > SQL DDL support has been a long-time ask from the community. Flink's
> > current SQL support covers only DML (e.g. SELECT and INSERT statements). In
> > its current form, Flink SQL users still need to define/create table sources
> > and sinks programmatically in Java/Scala. Also, without DDL support, the
> > current SQL Client implementation does not allow dynamic creation of tables
> > or functions with SQL, which adds friction to its adoption.
> > I drafted a design doc with a few other community members that describes
> > the design and implementation plan for adding DDL support in Flink. The
> > design considers DDL for tables, views, types, libraries and functions. It
> > will be great to get feedback on the design from the community, and to
> > align with the latest efforts on the unified SQL connector API and
> > Flink-Hive integration.
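> > For illustration, a table DDL in this style might look roughly like the
> > following. This is only a hypothetical sketch to give a flavor of the
> > proposal; the table name, columns, and property keys below are
> > placeholders, and the actual grammar is specified in the design doc:
> >
> >   CREATE TABLE user_clicks (
> >     user_id BIGINT,
> >     url VARCHAR,
> >     click_time TIMESTAMP
> >   ) WITH (
> >     connector.type = 'kafka',
> >     connector.topic = 'clicks',
> >     format.type = 'json'
> >   );
> >
> > With something like this, a SQL Client user could register a source or
> > sink in pure SQL, without writing any Java/Scala code.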
> > Any feedback is highly appreciated.
> > Thanks
> > Shuyi Chen
> > --
> > "So you have to trust that the dots will somehow connect in your future."