I'm wondering if you folks still have the bandwidth to work on this.
We have some dedicated resources and would like to move this forward. We can
Date: 2018-11-05 11:15:35
Subject: Re: [DISCUSS] Flink SQL DDL Design
Hi Shuyi, thanks for the proposal.
I have two concerns about the table DDL:
1. How about removing the source/sink mark from the DDL? It is not
necessary: the framework can determine whether a referenced table is a
source or a sink from the context of the query that uses it. This makes
it more convenient for users to define a table that serves as both a
source and a sink, and more convenient for the catalog to persist and
manage the meta information.
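To make this concrete, here is a minimal sketch (the table name, columns
and property below are made up for illustration) of a single definition
that the framework could treat as a source or a sink purely from query
context:

-- one definition, no source/sink mark
create table user_events (
  user_id bigint,
  event_time timestamp
) with (
  connector.type = 'kafka'
);

-- here user_events acts as a source (it is read from)
select user_id, count(*) from user_events group by user_id;

-- here the same user_events acts as a sink (it is written to)
insert into user_events
select user_id, event_time from some_other_table;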
2. How about keeping just one pure string map as the parameters for a
table? For example:

create table Kafka10SourceTable (
) with (
  connector.type = 'kafka',
  connector.property-version = '1',
  connector.version = '0.10',
  connector.properties.topic = 'test-kafka-topic',
  connector.properties.startup-mode = 'latest-offset',
  connector.properties.specific-offset = 'offset',
  format.type = 'json',
  format.derive-schema = 'true'
)
The reasons:
1. In TableFactory, what users work with is a string map of properties,
so defining parameters as a string map is the closest match to how users
actually use them.
2. The table descriptor can be extended by users, as is done for Kafka
and JSON. This means the parameter keys in the connector or format scope
can differ between implementations, and we cannot restrict the keys to a
fixed set, so we would need a map in the connector scope and a map in
the connector.properties scope. Why not just give users a single map and
let them put parameters in whatever shape they like? It is also the
simplest way to implement the DDL parser.
3. Whether a format clause can be defined depends on the implementation
of the connector. Using a separate clause in the DDL may create the
misunderstanding that connectors can be combined with arbitrary formats,
which may not actually work.
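As a sketch of the contrast (all keys below are illustrative, not part
of any fixed specification): with a single flat map, a connector author
can introduce arbitrary keys and the parser just collects opaque
key/value pairs, whereas a dedicated format clause would bake the
connector/format split into the grammar itself:

-- hypothetical custom connector: author-defined keys need no grammar
-- changes, since the parser only sees opaque key/value pairs
create table MyCustomTable (
  id bigint,
  payload varchar
) with (
  connector.type = 'my-custom-connector',
  my.custom.endpoint = 'http://localhost:8080',
  my.custom.retry-count = '3'
)

-- a separate format clause, by contrast, would suggest that any
-- connector can be combined with any format, which may not hold:
-- create table t (...) with (connector.type = '...') format (type = 'json')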
On Sun, 4 Nov 2018 at 18:25, Dominik Wosiński <wossyn@xxxxxxxxx> wrote:
+1, thanks for the proposal.
I guess this is a long-awaited change. It can vastly increase the
functionality of the SQL Client, as it will become possible to use
complex extensions such as those provided by Apache Bahir.
On Sat, 3 Nov 2018 at 17:17, Rong Rong <walterddr@xxxxxxxxx> wrote:
+1. Thanks for putting the proposal together Shuyi.
DDL has been brought up a couple of times previously [1,2].
DDL will definitely be a great extension to the current Flink SQL,
systematically supporting some of the previously requested features.
It will also be beneficial to see the document closely aligned with the
previous discussion of a unified SQL connector API.
I also left a few comments on the doc. Looking forward to the alignment
with the other couple of efforts and to contributing to them!
On Fri, Nov 2, 2018 at 10:22 AM Bowen Li <bowenli86@xxxxxxxxx> wrote:
I left some comments there. I think the design of the SQL DDL and the
integration/external catalog enhancements will work closely with each
other. I hope we are well aligned on the directions of the two designs,
and I look forward to working with you guys on both!
On Thu, Nov 1, 2018 at 10:57 PM Shuyi Chen <suez1224@xxxxxxxxx> wrote:
SQL DDL support has been a long-time ask from the community. Currently,
Flink SQL supports only DML (e.g. SELECT and INSERT statements). In its
current form, Flink SQL users still need to define/create table sources
and sinks programmatically in Java/Scala. Also, in SQL Client, without
DDL support, the current implementation does not allow dynamic creation
of tables or functions with SQL, which adds friction to its adoption.
I drafted a design doc with a few other community members that covers
the design and implementation for adding DDL support in Flink. The
design considers DDL for table, view, type, library and function; a
rough sketch of possible statement shapes appears below. It would be
great to get feedback on the design from the community, and to align it
with the latest efforts on the unified SQL connector API and Flink Hive
integration.
Any feedback is highly appreciated.
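For concreteness, here is a purely illustrative sketch of the statement
shapes such a design might cover (the names and exact syntax below are
placeholders, not what the design doc specifies):

-- table
create table clicks (
  user_id bigint,
  url varchar
) with (
  connector.type = 'kafka'
);

-- view
create view active_users as
select distinct user_id from clicks;

-- function (the class name is a placeholder)
create function my_udf as 'com.example.MyUdf';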
"So you have to trust that the dots will somehow connect in your