Jark Wu created FLINK-17693:
---
Summary: Add createTypeInformation to DynamicTableSink#Context
Key: FLINK-17693
URL: https://issues.apache.org/jira/browse/FLINK-17693
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-17689:
---
Summary: Add integration tests for Debezium and Canal formats
Key: FLINK-17689
URL: https://issues.apache.org/jira/browse/FLINK-17689
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-17647:
---
Summary: Improve new connector options exception in old planner
Key: FLINK-17647
URL: https://issues.apache.org/jira/browse/FLINK-17647
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-17633:
---
Summary: Improve FactoryUtil to align with new format options keys
Key: FLINK-17633
URL: https://issues.apache.org/jira/browse/FLINK-17633
Project: Flink
Issue Type
Jark Wu created FLINK-17630:
---
Summary: Implement format factory for Avro serialization and
deserialization schema
Key: FLINK-17630
URL: https://issues.apache.org/jira/browse/FLINK-17630
Project: Flink
Jark Wu created FLINK-17629:
---
Summary: Implement format factory for JSON serialization and
deserialization schema
Key: FLINK-17629
URL: https://issues.apache.org/jira/browse/FLINK-17629
Project: Flink
Jark Wu created FLINK-17625:
---
Summary: Fix ArrayIndexOutOfBoundsException in
AppendOnlyTopNFunction
Key: FLINK-17625
URL: https://issues.apache.org/jira/browse/FLINK-17625
Project: Flink
Issue
replacement for the open() method. This is
> the
> > > same strategy that was followed for StreamOperatorFactory, which was
> > > introduced to allow code generation in the Table API [1]. If we need
> > > metrics or other things we would add that as a parameter to the f
Hi,
Regarding the `open()/close()` methods, I think they are necessary for Table to
compile the generated code.
In Table, the watermark strategy and event-timestamp are defined using
SQL expressions, and we will
translate and generate Java code for the expressions. If we have
`open()/close()`, we don't need
+1 to returning an empty iterator and aligning the implementations.
Best,
Jark
On Sat, 9 May 2020 at 19:18, SteNicholas wrote:
> Hi Tang Yun,
> I agree with the point you mentioned that we should align these internal
> behaviors
> to return an empty iterator instead of null. In my opinion,
>
Jark Wu created FLINK-17591:
---
Summary: TableEnvironmentITCase.testExecuteSqlAndToDataStream
failed
Key: FLINK-17591
URL: https://issues.apache.org/jira/browse/FLINK-17591
Project: Flink
Issue
temporal tables derived from an append-only
>> stream, we either need to support TEMPORAL VIEW (as mentioned by Fabian)
>> or
>> need to have a way to convert an append-only table into a changelog table
>> (briefly discussed in [1]). It is not completely clear to me how a
Hi Lec,
You can use `StreamTableEnvironment#toRetractStream(table, Row.class)` to
get a `DataStream<Tuple2<Boolean, Row>>`.
A true Boolean flag indicates an add message; a false flag indicates a
retract (delete) message. So you can simply apply
a flatMap function after this to ignore the false messages. Then
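The flatMap idea described here can be sketched without a Flink dependency. This is a minimal illustration only: `RetractEntry` is a stand-in for Flink's `Tuple2<Boolean, Row>`, and the string payloads stand in for `Row` values; neither name comes from the Flink API.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of filtering a retract stream, assuming elements are
// (Boolean, payload) pairs as produced by toRetractStream: a true flag is
// an add message, a false flag is a retract (delete) message.
public class RetractFilterSketch {

    // Stand-in for Flink's Tuple2<Boolean, Row>.
    record RetractEntry(boolean isAdd, String payload) {}

    // Keep only the add messages, dropping retractions (the flatMap step).
    static List<String> keepAdds(List<RetractEntry> stream) {
        List<String> adds = new ArrayList<>();
        for (RetractEntry e : stream) {
            if (e.isAdd()) {
                adds.add(e.payload());
            }
        }
        return adds;
    }

    public static void main(String[] args) {
        List<RetractEntry> stream = List.of(
                new RetractEntry(true, "user=a,cnt=1"),
                new RetractEntry(false, "user=a,cnt=1"), // retracts the old result
                new RetractEntry(true, "user=a,cnt=2"));
        System.out.println(keepAdds(stream)); // only the add messages remain
    }
}
```

In a real job the same predicate would sit inside a `flatMap` (or `filter`) on the `DataStream` returned by `toRetractStream`.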
formats like parquet and orc,
> > Not just Flink itself, but this way also keeps Flink keys compatible with
> the
> > property keys of Hadoop / Hive / Spark.
> >
> > And like Jark said, this way works for Kafka key value too.
> >
> > Best,
> > Jing
coding style we should be consistent in Flink
> > > connectors and configuration. Because implementers of new connectors
> > > will copy the design of existing ones.
> > >
> > > Furthermore, I could imagine that people in the DataStream API would also
> > > like to conf
Jark Wu created FLINK-17528:
---
Summary: Use getters instead of RowData#get() utility in
JsonRowDataSerializationSchema
Key: FLINK-17528
URL: https://issues.apache.org/jira/browse/FLINK-17528
Project: Flink
Jark Wu created FLINK-17526:
---
Summary: Support AVRO serialization and deserialization schema for
RowData type
Key: FLINK-17526
URL: https://issues.apache.org/jira/browse/FLINK-17526
Project: Flink
Jark Wu created FLINK-17525:
---
Summary: Support parsing milliseconds and nanoseconds for TIME
type in CSV and JSON format
Key: FLINK-17525
URL: https://issues.apache.org/jira/browse/FLINK-17525
Project
te on each incoming record?
>
> Best,
> Andrey
>
> [1] note 2 in
>
> https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/state/state.html#incremental-cleanup
>
> On Wed, Apr 29, 2020 at 11:53 AM 刘大龙 wrote:
>
> >
> >
> >
Big +1 to this.
Best,
Jark
On Mon, 4 May 2020 at 23:44, Till Rohrmann wrote:
> Hi everyone,
>
> due to some changes on the ASF side, we are now seeing issue and pull
> request notifications for the flink-web [1] and flink-shaded [2] repo on
> dev@flink.apache.org. I think this is not ideal
g a single suffix to a
> single(at most 2 key + value) property. The question is between `format`
>
> =
>
> `json` vs `format.kind` = `json`. That said I'd be slightly in favor of
> doing it.
>
> Just to have a full picture. Both cases can be represented in yaml, but
>>>>>>>>> DML must be at the tail of the SQL file (which may be the most
> common case
> > > >>>> in production
> > > >>>>>>>>>> environments).
> > > >>>>>>>>>> Otherwise the platform must parse the sql first, then know
>
Big +1 from my side.
The new structure and class names look nicer now.
Regarding the compatibility problem, I have looked into the public APIs in
flink-jdbc; there are 3 kinds of APIs now:
1) the newly introduced JdbcSink for DataStream users in 1.11
2) JDBCAppendTableSink, JDBCUpsertTableSink,
select a /*int*/, b /*string*/ from tableA", "insert
> > into
> > > > blackhole select a /*double*/, b /*Map*/, c /*string*/ from tableB".
> It
> > > > seems that Blackhole is a universal thing, which makes me feel bad
> > > > intuitively
Hi,
Welcome to the community!
There is no contributor permission now; you can just comment under the JIRA
issue,
and a committer will assign the issue to you if no one is working on it.
Best,
Jark
On Thu, 30 Apr 2020 at 17:36, flinker wrote:
> Hi,
>
> I want to contribute to Apache Flink.
>
-mustache-client nor
> com.github.spullara.mustache.java:compiler (and thus is also not bundling
> them).
>
> You can check this yourself by packaging the connector and comparing the
> shade-plugin output with the NOTICE file.
>
> On 30/04/2020 08:55, Jark Wu wrote:
>
#diff-bd2211176ab6e7fa83ffeaa89481ff38
On Thu, 30 Apr 2020 at 14:44, Chesnay Schepler wrote:
> ES6 isn't bundling these dependencies.
>
> On 29/04/2020 17:29, Jark Wu wrote:
> > Looks like the ES NOTICE problem is a long-standing problem, because the
> > ES6 sql connect
Looks like the ES NOTICE problem is a long-standing problem, because the
ES6 sql connector NOTICE also misses these dependencies.
Best,
Jark
On Wed, 29 Apr 2020 at 17:26, Robert Metzger wrote:
> Thanks for taking a look Chesnay. Then let me officially cancel the
> release:
>
> -1 (binding)
>
>
From a user's perspective, I prefer the shorter "format=json", because
it's more concise and straightforward. The "kind" is redundant for users.
Is there a real case that requires representing the configuration in JSON style?
As far as I can see, there is no such requirement, and everything works
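For illustration, the concise flat style being argued for here would look like this in a DDL. The table name, schema, and connector options below are invented for the example, not taken from the thread:

```sql
CREATE TABLE orders (
  order_id BIGINT,
  amount   DOUBLE
) WITH (
  'connector' = 'kafka',   -- invented options, for illustration only
  'topic'     = 'orders',
  'format'    = 'json'     -- the short form, vs. the longer 'format.kind' = 'json'
);
```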
Jark Wu created FLINK-17462:
---
Summary: Support CSV serialization and deserialization schema for
RowData type
Key: FLINK-17462
URL: https://issues.apache.org/jira/browse/FLINK-17462
Project: Flink
Jark Wu created FLINK-17461:
---
Summary: Support JSON serialization and deserialization schema for
RowData type
Key: FLINK-17461
URL: https://issues.apache.org/jira/browse/FLINK-17461
Project: Flink
Hi lsyldliu,
Thanks for investigating this.
First of all, if you are using mini-batch deduplication, it doesn't support
state TTL in 1.9. That's why the TPS looks the same as 1.11 with state TTL
disabled.
We just introduced state TTL for mini-batch deduplication recently.
Regarding the
Jark Wu created FLINK-17437:
---
Summary: Use StringData instead of BinaryStringData in code
generation
Key: FLINK-17437
URL: https://issues.apache.org/jira/browse/FLINK-17437
Project: Flink
Issue
Jark Wu created FLINK-17430:
---
Summary: Support SupportsPartitioning in planner
Key: FLINK-17430
URL: https://issues.apache.org/jira/browse/FLINK-17430
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-17429:
---
Summary: Support SupportsOverwrite in planner
Key: FLINK-17429
URL: https://issues.apache.org/jira/browse/FLINK-17429
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-17428:
---
Summary: Support SupportsProjectionPushDown in planner
Key: FLINK-17428
URL: https://issues.apache.org/jira/browse/FLINK-17428
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-17427:
---
Summary: Support SupportsPartitionPushDown in planner
Key: FLINK-17427
URL: https://issues.apache.org/jira/browse/FLINK-17427
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-17426:
---
Summary: Support SupportsLimitPushDown in planner
Key: FLINK-17426
URL: https://issues.apache.org/jira/browse/FLINK-17426
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-17425:
---
Summary: Support SupportsFilterPushDown in planner
Key: FLINK-17425
URL: https://issues.apache.org/jira/browse/FLINK-17425
Project: Flink
Issue Type: Sub-task
> information in FLINK-11286, but in general I'd be supportive of defining
> watermark as close as possible from source, as it'll be easier to reason
> about. (I basically refer to timestamp assigner instead of watermark
> assigner though.)
>
> - Jungtaek Lim
>
> On Tue, Apr 28,
Hi Jungtaek,
Kurt has said what I want to say. I will add some background.
Flink Table API & SQL only supports defining a processing-time attribute and
an event-time attribute (watermark) on a source; it does not support defining
a new one in a query.
The time attributes will pass through the query and
+1 for xyz.[min|max]
This is already mentioned in the Code Style Guideline [1].
Best,
Jark
[1]:
https://flink.apache.org/contributing/code-style-and-quality-components.html#configuration-changes
On Mon, 27 Apr 2020 at 21:33, Flavio Pompermaier
wrote:
> +1 for Chesnay approach
>
> On Mon,
Thanks Dian for being the release manager, and thanks to all who made this
possible.
Best,
Jark
On Sun, 26 Apr 2020 at 18:06, Leonard Xu wrote:
> Thanks Dian for the release and being the release manager !
>
> Best,
> Leonard Xu
>
>
> On 26 Apr 2020, at 17:58, Benchao Li wrote:
>
> Thanks Dian for the
Jark Wu created FLINK-17385:
---
Summary: Fix precision problem when converting JDBC numeric into
Flink decimal type
Key: FLINK-17385
URL: https://issues.apache.org/jira/browse/FLINK-17385
Project: Flink
+1
Thanks,
Jark
On Thu, 23 Apr 2020 at 22:36, Xintong Song wrote:
> +1
> From our side we can also benefit from the extending of feature freeze, for
> pluggable slot allocation, GPU support and perjob mode on Kubernetes
> deployment.
>
> Thank you~
>
> Xintong Song
>
>
>
> On Thu, Apr 23, 2020
Jark Wu created FLINK-17337:
---
Summary: Send UPDATE messages instead of INSERT and DELETE in
streaming join operator
Key: FLINK-17337
URL: https://issues.apache.org/jira/browse/FLINK-17337
Project: Flink
E and TEMPORAL VIEW
> would be a nice-to-have feature for some later time.
>
> Cheers, Fabian
>
>
>
>
>
>
> Am Fr., 17. Apr. 2020 um 18:13 Uhr schrieb Jark Wu :
>
> > Hi Fabian,
> >
> > I think converting an append-only table into temporal
ng time attribute [2].
> > > >
> > > > ## 2.2 the temporal table is a TableFunction and parameterized in the
> > > query
> > > > (see 1.3.1 above)
> > > >
> > > > SELECT *
> > > > FROM orders o,
> > > >
Congratulations Hequn!
Best,
Jark
On Fri, 17 Apr 2020 at 15:32, Yangze Guo wrote:
> Congratulations!
>
> Best,
> Yangze Guo
>
> On Fri, Apr 17, 2020 at 3:19 PM Jeff Zhang wrote:
> >
> > Congratulations, Hequn!
> >
> > On Fri, 17 Apr 2020 at 3:02 PM, Paul Lam wrote:
> >
> > > Congrats Hequn! Thanks a lot
Hi Konstantin,
Thanks for bringing up this discussion. I think temporal join is a very
important feature and should be exposed to pure SQL users.
I have already received many requests for it.
However, my concern is how to properly support this feature in SQL.
Introducing a DDL syntax for
the DDLs,
> > >
> > > So I think it could be good to place connector/format jars in some
> > > dir like
> > > opt/connector which would not affect jobs by default, and introduce a
> > > mechanism of dynamic discovery for SQL.
> > >
+1 (binding)
Thanks Dawid for driving this.
Best,
Jark
On Thu, 16 Apr 2020 at 15:54, Dawid Wysakowicz
wrote:
> Hi all,
>
> I would like to start the vote for FLIP-124 [1], which is discussed and
> reached a consensus in the discussion thread [2].
>
> The vote will be open until April 20th,
+1 for releasing 1.9.3 soon.
Thanks Dian for driving this!
Best,
Jark
On Wed, 15 Apr 2020 at 22:11, Congxian Qiu wrote:
> +1 to create a new 1.9 bugfix release. and FLINK-16576[1] has merged into
> master, filed a pr for release-1.9 already
>
> [1]
Jark Wu created FLINK-17169:
---
Summary: Refactor BaseRow to use RowKind instead of byte header
Key: FLINK-17169
URL: https://issues.apache.org/jira/browse/FLINK-17169
Project: Flink
Issue Type: Sub
.
> >>> This will improve the user experience (especially for new Flink users).
> >>> We answered so many questions about "class not found".
> >>>
> >>> Best,
> >>> Godfrey
> >>>
> >>> On Wed, 15 Apr 2020 (PM), Dian Fu wrote:
> > metaspace
> > >> >>>> > > > > > increase is more likely to cause problem.
> > >> >>>> > > > > >
> > >> >>>> > > > > > So basically only people have small 'process.
Jark Wu created FLINK-17157:
---
Summary: TaskMailboxProcessorTest.testIdleTime failed on travis
Key: FLINK-17157
URL: https://issues.apache.org/jira/browse/FLINK-17157
Project: Flink
Issue Type: Bug
+1 to the proposal. I also found the "download additional jar" step is
really tedious when preparing webinars.
At least, I think flink-csv and flink-json should be in the distribution;
they are quite small and don't have other dependencies.
Best,
Jark
On Wed, 15 Apr 2020 at 15:44, Jeff Zhang
Jark Wu created FLINK-17150:
---
Summary: Introduce Canal format to support reading canal changelogs
Key: FLINK-17150
URL: https://issues.apache.org/jira/browse/FLINK-17150
Project: Flink
Issue Type
Jark Wu created FLINK-17149:
---
Summary: Introduce Debezium format to support reading debezium
changelogs
Key: FLINK-17149
URL: https://issues.apache.org/jira/browse/FLINK-17149
Project: Flink
Hi all,
The voting time for FLIP-105 has passed. I'm closing the vote now.
There were 5 +1 votes, 3 of which are binding:
- Benchao (non-binding)
- Jark (binding)
- Jingsong Li (binding)
- zoudan (non-binding)
- Kurt (binding)
There were no disapproving votes.
Thus, FLIP-105 has been
+1 (binding)
Best,
Jark
On Sun, 12 Apr 2020 at 09:24, Benchao Li wrote:
> +1 (non-binding)
>
> On Sat, 11 Apr 2020 at 11:31 AM, Jark Wu wrote:
>
> > Hi all,
> >
> > I would like to start the vote for FLIP-105 [1], which is discussed and
> > reached a consensus in the dis
+1
Best,
Jark
On Sun, 12 Apr 2020 at 12:28, Benchao Li wrote:
> +1 (non-binding)
>
> On Sun, 12 Apr 2020 at 9:52 AM, zoudan wrote:
>
> > +1 (non-binding)
> >
> > Best,
> > Dan Zou
> >
> >
> > > On 10 Apr 2020, at 09:30, Danny Chan wrote:
> > >
> > > +1 from my side.
> > >
> > > Best,
> > > Danny Chan
> > > 在
Hi all,
I would like to start the vote for FLIP-105 [1], which is discussed and
reached a consensus in the discussion thread [2].
The vote will be open for at least 72h, unless there is an objection or not
enough votes.
Thanks,
Jark
[1]
Sorry for the late reply,
I have some concern around "Supporting SHOW VIEWS|DESCRIBE VIEW name".
Currently, in SQL CLI, the "SHOW TABLES" will also list views and "DESCRIBE
name" can also describe a view.
Shall we remove the view support from those commands if we want to support a
dedicated "SHOW
Hi Xiaogang,
I think this proposal doesn't conflict with your use case, you can still
chain a ProcessFunction after a source which emits raw data.
But I'm not in favor of chaining a ProcessFunction after a source, and we
should avoid that, because:
1) For correctness, it is necessary to perform the
+1 from my side (binding)
Best,
Jark
On Fri, 10 Apr 2020 at 17:03, Timo Walther wrote:
> +1 (binding)
>
> Thanks for the healthy discussion. I think this feature can be useful
> during the development of a pipeline.
>
> Regards,
> Timo
>
> On 10.04.20 03:34, Danny Chan wrote:
> > Hi all,
> >
>
n't find a good name for separate option keys, because JSON is also
a format, not an encoding, but `format.format=json` is weird.
Hi everyone,
If there are no further concerns, I would like to start a voting thread by
tomorrow.
Best,
Jark
On Wed, 8 Apr 2020 at 15:37, Jark Wu wrote:
> Hi Kurt
Thanks Yun,
This is a great feature! I was surprised by the autolink feature yesterday
(I didn't know about your work at that time).
Best,
Jark
On Thu, 9 Apr 2020 at 16:12, Yun Tang wrote:
> Hi community
>
> The autolink to Flink JIRA ticket has taken effect. You could refer to the
> commit details
`table.dynamic-table-options.enabled` and `TableConfigOptions` sound good
to me.
Best,
Jark
On Wed, 8 Apr 2020 at 18:59, Danny Chan wrote:
> `table.dynamic-table-options.enabled` seems fine to me, I would make a new
> `TableConfigOptions` class and put the config option there ~
>
> What do
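For context, this option gates the dynamic table options (OPTIONS hint) feature discussed in FLIP-113. Below is a sketch of the intended usage; the table name is invented and the option key inside the hint is only an example:

```sql
-- Must be enabled first, e.g. in the TableConfig or SQL CLI:
-- SET 'table.dynamic-table-options.enabled' = 'true';

-- Then a query can override table options inline via the hint:
SELECT id, name
FROM kafka_table /*+ OPTIONS('scan.startup.mode' = 'earliest-offset') */;
```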
ies to canal.
>
> Best,
> Kurt
>
>
> On Tue, Apr 7, 2020 at 11:49 AM Jark Wu wrote:
>
> > Hi everyone,
> >
> > Since this FLIP was proposed, the community has discussed a lot about the
> > first approach: introducing new TableSource and TableSink interfac
e/sink
> > - There is a global config option to default disable this feature (if
> > user uses OPTIONS, an exception throws to tell open the option)
> >
> > I have updated the WIKI
> > <
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-113%3A+Supports
Jark Wu created FLINK-17028:
---
Summary: Introduce a new HBase connector with new property keys
Key: FLINK-17028
URL: https://issues.apache.org/jira/browse/FLINK-17028
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-17027:
---
Summary: Introduce a new Elasticsearch connector with new property
keys
Key: FLINK-17027
URL: https://issues.apache.org/jira/browse/FLINK-17027
Project: Flink
Issue
Jark Wu created FLINK-17026:
---
Summary: Introduce a new Kafka connector with new property keys
Key: FLINK-17026
URL: https://issues.apache.org/jira/browse/FLINK-17026
Project: Flink
Issue Type: Sub-task
Hi all,
The voting time for FLIP-122 has passed. I'm closing the vote now.
There were 8 +1 votes, 4 of which are binding:
- Timo (binding)
- Dawid (binding)
- Benchao Li (non-binding)
- Jingsong Li (binding)
- LakeShen (non-binding)
- Leonard Xu (non-binding)
- zoudan (non-binding)
- Jark
functionality required by the interface.
> Nevertheless I am happy to hear other opinions.
>
> @all I also prefer the buffering approach. Let's wait a day or two more
> to see if others think differently.
>
> Best,
>
> Dawid
>
> On 07/04/2020 06:11, Jark Wu wrote:
Jark Wu created FLINK-17015:
---
Summary: Fix NPE from NullAwareMapIterator
Key: FLINK-17015
URL: https://issues.apache.org/jira/browse/FLINK-17015
Project: Flink
Issue Type: Bug
Components
Hi Dawid,
Thanks for driving this. This is a blocker for supporting the Debezium CDC
format (FLIP-105), so a big +1 from my side.
Regarding emitting multiple records and checkpointing, I'm also in favor
of option #1: buffer all the records outside of the checkpoint lock.
I think most of the use cases
[2]: http://apache-flink.147419.n8.nabble.com/SURVEY-CDC-td1910.html
On Fri, 14 Feb 2020 at 22:08, Jark Wu wrote:
> Hi everyone,
>
> I would like to start discussion about how to support interpreting
> external changelog into Flink SQL, and how to emit changelog from Flink SQL.
I'm fine with disabling this feature by default and avoiding
whitelisting/blacklisting. This simplifies a lot of things.
Regarding TableSourceFactory#Context#getExecutionOptions: do we really
need this interface?
Should the connector factory be aware of whether the properties are merged
with hints or not?
Hi all,
I would like to start the vote for FLIP-122 [1], which is discussed and
reached a consensus in the discussion thread [2].
The vote will be open for at least 72h, unless there is an objection or not
enough votes.
Thanks,
Timo
[1]
> Regards,
> Timo
>
>
> On 02.04.20 14:06, Jark Wu wrote:
> > Hi Dawid,
> >
> >> How to express projections with TableSchema?
> > The TableSource holds the original TableSchema (i.e. from DDL) and the
> > pushed TableSchema represents the schema after pr
> +1(non-binding)
> >>
> >> Best,
> >> Leonard Xu
> >>
> >>> On 30 Mar 2020, at 16:43, Jingsong Li wrote:
> >>>
> >>> +1
> >>>
> >>> Best,
> >>> Jingsong Lee
> >>>
> >
Congratulations to you all!
Best,
Jark
On Wed, 1 Apr 2020 at 20:33, Kurt Young wrote:
> Congratulations to you all!
>
> Best,
> Kurt
>
>
> On Wed, Apr 1, 2020 at 7:41 PM Danny Chan wrote:
>
> > Congratulations!
> >
> > Best,
> > Danny Chan
> > On 1 Apr 2020 at 7:36 PM +0800, dev@flink.apache.org wrote:
Hi everyone,
If there are no objections, I would like to start a voting thread by
tomorrow. So this is the last call to give feedback for FLIP-122.
Cheers,
Jark
On Wed, 1 Apr 2020 at 16:30, zoudan wrote:
> Hi Jark,
> Thanks for the proposal.
> I like the idea that we put the version in
+1 to making the blink planner the default planner.
We should give the blink planner more exposure to encourage users to try out
new features and lead them to migrate to the blink planner.
Glad to see blink planner is used in production since 1.9! @Benchao
Best,
Jark
On Wed, 1 Apr 2020 at 11:31, Benchao Li
Jark Wu created FLINK-16889:
---
Summary: Support converting BIGINT to TIMESTAMP for TO_TIMESTAMP
function
Key: FLINK-16889
URL: https://issues.apache.org/jira/browse/FLINK-16889
Project: Flink
+New+Factory
Please let me know if you have other questions.
Best,
Jark
On Wed, 1 Apr 2020 at 00:56, Jark Wu wrote:
> Hi, Dawid
>
> Regarding `connector.property-version`,
> I totally agree with you that we should implicitly add a "property-version=1"
> (without 'connect
> Thanks for the proposal. I'm +1 since it's simpler and clearer for SQL
> users.
> I have a question about this: does this affect descriptors and related
> validators?
>
> *Best Regards,*
> *Zhenghua Gao*
>
>
> On Mon, Mar 30, 2020 at 2:02 PM Jark Wu wrote:
>
+1 from my side. This will be a very useful feature.
Best,
Jark
> On 31 Mar 2020 at 18:15, Danny Chan wrote:
>
> +1 for this feature. Although the WITH syntax breaks the SQL standard,
> it's compatible with our CREATE TABLE syntax, so it looks good from my side.
>
> Best,
> Danny Chan
> On 31 Mar 2020
Jark Wu created FLINK-16887:
---
Summary: Refactor retraction rules to support inferring
ChangelogMode
Key: FLINK-16887
URL: https://issues.apache.org/jira/browse/FLINK-16887
Project: Flink
Issue
-null-literal # and
it can also be used for ES sources in the future
connector.bulk-flush.back-off.type => sink.bulk-flush.back-off.strategy
jdbc:
connector.table => table-name
Welcome further feedbacks!
Best,
Jark
On Tue, 31 Mar 2020 at 14:45, Jark Wu wrote:
> Hi Kurt,
>
concerns:
>
> • For the new keys, do we still need to put multiple lines there for such
> keys, such as "connector.properties.abc" and "connector.properties.def", or
> should we inline them, such as "some-key-prefix" = "k1=v1, k2=v2 ..."
> • Should the ConfigOption support wildcards? (If we p
;>> non-blocking streaming queries. Also with the `EMIT STREAM` key word
> in
> >>>> mind that we might add to SQL statements soon.
> >>>>
> >>>> Regards,
> >>>> Timo
> >>>>
> >>>> [1] https://issues.apache.o
example, the framework should add "connector.property-version=1" to
properties when processing a DDL statement.
I'm fine with adding a "connector.property-version=1" when processing a DDL
statement, but I think it's also fine if we don't,
because this can be done in the future if needed and
> source, but we cannot leave it blank for sink, and vice versa.
> >> I think we can also add a type for dimension tables except source and
> >> sink.
> >>
> >> On Mon, 30 Mar 2020 at 8:16 PM, Kurt Young <ykt...@gmail.com> wrote:
> >>
>
> - properties for source only: with "source." prefix, like
> "source.startup-mode"
> - properties for sink only: with "sink." prefix, like "sink.partitioner"
>
> What do you think?
>
> Best,
> Jingsong Lee
>
> On Mon,
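A sketch of the prefix scheme quoted above. The keys follow the proposal in this thread, not necessarily the finally adopted option names, and the table definition is invented for illustration:

```sql
CREATE TABLE user_actions (
  user_id BIGINT,
  action  STRING
) WITH (
  'connector' = 'kafka',
  'source.startup-mode' = 'earliest',    -- read only by sources, per the proposal
  'sink.partitioner'    = 'round-robin'  -- read only by sinks, per the proposal
);
```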
+1 from my side.
Thanks Timo for driving this.
Best,
Jark
On Mon, 30 Mar 2020 at 15:36, Timo Walther wrote:
> Hi all,
>
> I would like to start the vote for FLIP-95 [1], which is discussed and
> reached a consensus in the discussion thread [2].
>
> The vote will be open until April 2nd (72h),
> , 'connector.properties.0.value' = 'xxx'
> > > , 'connector.properties.1.key' = 'bootstrap.servers'
> > > , 'connector.properties.1.value' = 'x'
> > >
> >
> > I can understand this config, but for a Flink newcomer, maybe