Jark Wu created FLINK-19610:
---
Summary: Support streaming window TopN in planner
Key: FLINK-19610
URL: https://issues.apache.org/jira/browse/FLINK-19610
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-19609:
---
Summary: Support streaming window join in planner
Key: FLINK-19609
URL: https://issues.apache.org/jira/browse/FLINK-19609
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-19608:
---
Summary: Support window TVF based window aggregate in planner
Key: FLINK-19608
URL: https://issues.apache.org/jira/browse/FLINK-19608
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-19606:
---
Summary: Implement streaming window join operator
Key: FLINK-19606
URL: https://issues.apache.org/jira/browse/FLINK-19606
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-19607:
---
Summary: Implement streaming window TopN operator
Key: FLINK-19607
URL: https://issues.apache.org/jira/browse/FLINK-19607
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-19605:
---
Summary: Implement cumulative windowing for window aggregate
operator
Key: FLINK-19605
URL: https://issues.apache.org/jira/browse/FLINK-19605
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-19604:
---
Summary: FLIP-145: Support SQL windowing table-valued function
Key: FLINK-19604
URL: https://issues.apache.org/jira/browse/FLINK-19604
Project: Flink
Issue Type: New Feature
>
> >
> >
> > Best,
> > Hailong Wang
> > At 2020-10-10 17:06:07, "Jark Wu" wrote:
> > >Hi all,
> > >
> > >I would like to start the vote for FLIP-145 [1], which is discussed and
> > >reached consensus in the discussion thread [2].
+1
On Mon, 12 Oct 2020 at 17:14, Yu Li wrote:
> +1
>
> Best Regards,
> Yu
>
>
> On Mon, 12 Oct 2020 at 14:41, Congxian Qiu wrote:
>
> > I would like to start a voting thread for adding translation
> specification
> > for Stateful Functions, which we’ve reached consensus in [1].
> >
> >
> >
+1
On Sat, 10 Oct 2020 at 18:41, Benchao Li wrote:
> +1
>
> Jark Wu wrote on Sat, 10 Oct 2020 at 6:06 PM:
>
> > Hi all,
> >
> > I would like to start the vote for FLIP-145 [1], which is discussed and
> > reached consensus in the discussion thread [2].
> >
>
Hi all,
I would like to start the vote for FLIP-145 [1], which is discussed and
reached consensus in the discussion thread [2].
The vote will be open until 13th Oct. (72h), unless there is an objection
or not enough votes.
Best,
Jark
[1]:
Hi everyone,
Thanks everyone for this healthy discussion. I think we have addressed all
the concerns. I would continue with a voting.
If you have any new objections, feel free to let me know.
Best,
Jark
On Sat, 10 Oct 2020 at 17:54, Jark Wu wrote:
> Hi Jingsong,
>
> That's a good question.
traditional batch SQL, as long as the window related
> attributes are included in the key.
>
> I am not sure about the CUMULATE window. Yes, it's a common requirement, but is
> there any more evidence (from other systems) to prove this word ("CUMULATE") is
> appropriate?
>
> Best,
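The CUMULATE semantics being debated in the quoted thread can be sketched roughly as follows: all windows of one cycle share a start, and the window end grows by a fixed step up to a maximum size, so a single event falls into several windows. This is an illustrative Python sketch of the described behavior, not the planner implementation; the function and parameter names are made up.

```python
def cumulate_windows(event_ms, step_ms, max_size_ms):
    """Assign an event to all cumulative windows covering it.

    All windows of one cycle share the same start; the end grows by
    `step_ms` up to `max_size_ms`. Illustrative sketch only.
    """
    start = event_ms - (event_ms % max_size_ms)
    windows = []
    end = start + step_ms
    while end <= start + max_size_ms:
        if event_ms < end:
            windows.append((start, end))
        end += step_ms
    return windows

# An event at 1500ms with step=1000ms and max size=4000ms falls into
# the cumulative windows ending at 2000, 3000 and 4000.
```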
Polymorphic table functions are table functions which just append
additional columns and convert N rows into M rows; they can't touch meta
information.
Best,
Jark
On Sat, 10 Oct 2020 at 15:41, Jark Wu wrote:
> Hi Danny,
>
> Thanks for the hint about named params syntax. I added examples on usage
> of window operators and streaming operators. And I think we can achieve
> this with little work.
>
> Best,
> Pengcheng
>
>
> Jark Wu wrote on Sat, 10 Oct 2020 at 1:45 PM:
>
>> Hi Benchao,
>>
>> That's a good question.
>>
>> IMO, the new windowed ope
TVFs.
> However, there are some other places that need a time attribute:
> - CEP
> - interval join
> - order by
> - over window
> If there is no time attribute column, how do we integrate these old
> features with the new TVFs?
> E.g.
> StreamA -> new window aggrega
functions to infer or
> access the window properties.
>
> For the `grouping sets` on TVFs, I think it's interesting if we
> can support it, as we already supported `grouping sets`
> on streaming aggregates in blink planner. But I'm not sure if it
> will be included int
e. That
> would conflict with the 1.11 patch you suggested. Let me know if you think
> I should make the default true in the SQL API.
>
>
>
> https://github.com/apache/flink/pull/13570
>
>
>
> Regards,
>
> Dylan
>
>
>
> *From: *Jark Wu
> *Date: *Thursda
Thanks for driving the discussion Congxian!
I'm +1 for adding translation specifications for stateful functions.
What about merging the proposed translation specifications into the
existing document [1]?
It seems the new specifications only add some terminology translation for
the glossary.
Most
Hi all,
I know we have a lot of discussion and development ongoing right now, but
it would be great if we can get FLIP-145 into a votable state.
If there are no objections, I would like to start voting in the next days.
Best,
Jark
On Thu, 1 Oct 2020 at 14:29, Jark Wu wrote:
> Hi every
Hi Dylan,
Sorry for the late reply. We've just come back from a long holiday.
Thanks for reporting this problem. First, I think this is a bug that
`autoCommit` is false by default (JdbcRowDataInputFormat.Builder).
We can fix the default to true in 1.11 series, and I think this can solve
your
Hi,
This exception is because you don't have proper HBase dependency in your
cluster.
Please refer to the HBase documentation about how to add HBase dependency
jars to HADOOP_CLASSPATH.
Flink will load all jars under HADOOP_CLASSPATH automatically.
Btw, dev mailing list is used for discussing
the performance to make it powerful and useful.
Best,
Jark
On Thu, 1 Oct 2020 at 14:28, Jark Wu wrote:
> Hi Pengcheng,
>
> Yes, the window TVF is part of the FLIP. Welcome to contribute and join
> the discussion.
> Regarding the SESSION window aggregation, users can use the existing
out of the FLIP now to catch up 1.12.
>
> Recently, I've done some work on the stream planner with the TVFs,
> and I'm willing to contribute
> to this part. Is it in the plan of this FLIP?
>
> Best,
> PengchengLiu
>
>
> > On 2020/9/26
I've also opened a jira that is related to this feature recently:
> https://issues.apache.org/jira/browse/FLINK-18830
>
> Best!
> PengchengLiu
>
> On 2020/9/25 at 10:30 PM, "Jark Wu" wrote:
>
> Hi everyone,
>
> I want to start a FLIP about supporting windowi
Hi everyone,
I want to start a FLIP about supporting windowing table-valued functions
(TVF).
The main purpose of this FLIP is to improve the near real-time (NRT)
experience of Flink.
FLIP-145:
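As a rough illustration of the windowing-TVF idea (the FLIP proposes table-valued functions that append window_start/window_end columns to each row), tumbling-window assignment can be sketched in Python. This is illustrative only, not the Flink API:

```python
def tumble(event_ms, size_ms):
    """Return the (window_start, window_end) pair a TUMBLE-style TVF
    would append to a row with the given event timestamp. Sketch only."""
    start = event_ms - (event_ms % size_ms)
    return (start, start + size_ms)

# A row at 2500ms with a 1000ms tumbling window gets the
# appended columns (2000, 3000).
```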
+1 (binding)
Best,
Jark
On Thu, 24 Sep 2020 at 16:22, Jingsong Li wrote:
> +1 (binding)
>
> Best,
> Jingsong
>
> On Thu, Sep 24, 2020 at 4:18 PM Kurt Young wrote:
>
> > +1 (binding)
> >
> > Best,
> > Kurt
> >
> >
> > On Thu, Sep 24, 2020 at 4:01 PM Timo Walther wrote:
> >
> > > Hi all,
> > >
+1 to move it there.
On Thu, 24 Sep 2020 at 12:16, Jingsong Li wrote:
> Hi devs and users:
>
> After the 1.11 release, I heard some voices recently: why can't Hive's
> documents be found in the "Table & SQL Connectors" section?
>
> Actually, Hive's documents are in the "Table API & SQL". Since the
e to make Row an interface and have concrete row
> implementations for different purposes but this would break existing
> programs too much.
>
> What do you think?
>
> Regards,
> Timo
>
>
> On 18.09.20 11:04, Jark Wu wrote:
> > Personally I think the fieldNames
Hi Husky,
Module is a mechanism to support built-in functions which should always be
in the classpath.
So I'm afraid it may conflict with the current mechanism to support dynamic
loading for modules.
IIUC, what you want is the `CREATE FUNCTION ... USING JAR` which is
discussed in FLINK-14055
Jark Wu created FLINK-19298:
---
Summary: Maven enforce goal dependency-convergence failed on
flink-json
Key: FLINK-19298
URL: https://issues.apache.org/jira/browse/FLINK-19298
Project: Flink
Issue
Hi Kosma,
Thanks for the proposal. I like it and we also have supported similar
syntax in our company.
The problem is that Flink SQL leverages Calcite as the query parser, so if
we want to support this syntax, we may have to push this syntax back to the
Calcite community.
Besides, the SQL
Since FLIP-95, the parallelism is decoupled from the runtime class
(DataStream/SourceFunction),
so we need to have an API to tell the planner what the parallelism of the
source/sink is.
This is indeed the purpose of a previous discussion: [DISCUSS] Introduce
SupportsParallelismReport and
Personally I think the fieldNames Map is confusing and not handy.
I just have an idea but not sure what you think.
What about adding a new constructor with a List of field names? This enables
all name-based setters/getters.
Regarding the List -> Map cost for every record, we can suggest users to
reuse
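The reuse suggestion above can be sketched as building the name -> position map once per schema rather than per record. This is an illustrative Python sketch; the names `name_to_index` and `set_field` are made up:

```python
field_names = ["id", "name", "ts"]
name_to_index = {n: i for i, n in enumerate(field_names)}  # built once per schema

def set_field(row, name, value):
    # O(1) lookup per record; no per-record Map allocation
    row[name_to_index[name]] = value

row = [None] * len(field_names)
set_field(row, "name", "flink")
```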
Yes. We should update the "version" in docs/_config.yml in release-1.11
branch.
Best,
Jark
On Fri, 18 Sep 2020 at 14:00, Chesnay Schepler wrote:
> > Do we need to change this site version after releasing minor/bugfix
>
> versions?
>
> yes.
>
>
> On 9/18/2020 7:51 AM, Jingsong Li wrote:
> > Hi
Jark Wu created FLINK-19280:
---
Summary: The option "sink.buffer-flush.max-rows" for JDBC can't be
disabled by setting it to zero
Key: FLINK-19280
URL: https://issues.apache.org/jira/browse/FLINK-19280
>> The other part looks really good to me.
> >> Regards,
> >> Leonard
> >>> On 10 Sep 2020, at 18:26, Aljoscha Krettek wrote:
> >>>
> >>> I've only been watching this from the sidelines but that latest
> proposal looks very good to me!
> >>
+1 (binding)
Best,
Jark
On Tue, 15 Sep 2020 at 10:32, Leonard Xu wrote:
> +1(non-binding)
>
> Leonard
>
> > On 12 Sep 2020, at 21:46, Danny Chan wrote:
> >
> > +1, non-binding ~
> >
> > Konstantin Knauf wrote on Fri, 11 Sep 2020 at 2:04 AM:
> >
> >> +1 (binding)
> >>
> >>
> >>
> >> On Thu, Sep 10, 2020 at 4:29 PM
Jark Wu created FLINK-19258:
---
Summary: Fix the wrong example of "csv.line-delimiter" in CSV
documentation
Key: FLINK-19258
URL: https://issues.apache.org/jira/browse/FLINK-19258
Proj
Congratulations to Godfrey for becoming a Flink committer!
Cheers,
Jark Wu
Congratulations Yun!
On Wed, 16 Sep 2020 at 10:40, Rui Li wrote:
> Congratulations Yun!
>
> On Wed, Sep 16, 2020 at 10:20 AM Paul Lam wrote:
>
> > Congrats, Yun! Well deserved!
> >
> > Best,
> > Paul Lam
> >
> > > On 15 Sep 2020 at 19:14, Yang Wang wrote:
> > >
> > > Congratulations, Yun!
> > >
> > >
Making the most common case as short as just
> > > adding `METADATA` is a very good idea. Thanks, Danny!
> > >
> > > Let me update the FLIP again with all these ideas.
> > >
> > > Regards,
> > > Timo
> > >
> > >
> > > On 09.09.20 15:0
> >> But I see that the discussion leans towards:
> >>
> >> timestamp INT SYSTEM_METADATA("ts")
> >>
> >> Which is fine with me. It is the shortest solution, because we don't
> >> need additional CAST. We can discuss the syntax, so that confus
Trying to solve everything via properties sounds rather like a hack to
> > > me. You are right that one could argue that "timestamp", "headers" are
> > > something like "key" and "value". However, mixing
> > >
> > > `offset AS SYST
I prefer to have separate APIs for them, as a changelog stream requires the
Row type.
It would make the API more straightforward and reduce the confusion.
Best,
Jark
On Wed, 9 Sep 2020 at 16:21, Timo Walther wrote:
> I had this in the inital design, but Jark had concerns at least for the
>
> >> [1] https://docs.microsoft.com/en-US/sql/t-sql/statements/alter-table-computed-column-definition-transact-sql?view=sql-server-ver15
> >> [2] https://www.postgresql.org/docs/12/ddl-generated-columns.html
Hi,
I'm +1 to use the naming of from/toDataStream, rather than
from/toInsertStream. So we don't need to deprecate the existing
`fromDataStream`.
I'm +1 to Danny's proposal: fromDataStream(dataStream, Schema) and
toDataStream(table, AbstractDataType)
I think we can also keep the method
SupportsWatermarkPushDown includes the functionality of the other
> interface.
>
> What do you think?
>
> Regards,
> Timo
>
>
> On 08.09.20 04:32, Jark Wu wrote:
> > Thanks to Shengkai for summarizing the problems on the FLIP-95 interfaces
> > and solutions.
>
t one question.
> >>>>
> >>>> 4. Can we make the value.fields-include more orthogonal? Like one can
> >>>> specify it as "EXCEPT_KEY, EXCEPT_TIMESTAMP".
> >>>> With current EXCEPT_KEY and EXCEPT_KEY_TIMESTAMP, users can not
>
> I went through the list of current connectors and formats. I updated the
> FLIP for the Kafka and Debezium. For the key design, I used the FLIP-122
> naming schema. For HBase, Elasticsearch and others I could not find
> metadata that might be important for users.
>
> 4. "sub-expressi
Thanks to Shengkai for summarizing the problems on the FLIP-95 interfaces
and solutions.
I think the new proposal, i.e. only pushing the "WatermarkStrategy" is much
cleaner and easier to develop than before.
So I'm +1 to the proposal.
Best,
Jark
On Sat, 5 Sep 2020 at 13:44, Shengkai Fang
T)
And we will push down only one "headers" metadata, right?
Best,
Jark
On Mon, 7 Sep 2020 at 19:55, Jark Wu wrote:
> Thanks Timo,
>
> I think this FLIP is already in great shape!
>
> I have following questions:
>
> 1. `Map<String, DataType> listReadableMetadata()` only allows one possible
> DataType for a metadata key.
Thanks Timo,
I think this FLIP is already in great shape!
I have following questions:
1. `Map<String, DataType> listReadableMetadata()` only allows one possible
DataType for a metadata key.
However, users may expect to use different types, e.g. for "timestamp"
metadata, users may use it as BIGINT, or
Jark Wu created FLINK-19128:
---
Summary: Remove the runtime execution configuration in
sql-client-defaults.yaml
Key: FLINK-19128
URL: https://issues.apache.org/jira/browse/FLINK-19128
Project: Flink
Hi Timo,
1. "fromDataStream VS fromInsertStream"
In terms of naming, personally, I prefer `fromDataStream`,
`fromChangelogStream`, `toDataStream`, `toChangelogStream` than
`fromInsertStream`, `toInsertStream`.
2. "fromDataStream(DataStream, Expression...) VS
> > Row row = new Row(2);
> > row.setField("f1", "ABC"); // position 0 because first usage of "f1"
> > row.setField("f0", "ABC"); // position 1 because first usage of "f0"
> > row.setField("f1", "ABC"); // position 0 because second usage of "f1"
> > Maybe I leaked too many implementation details there that rather
> > confuse readers than help. Internally, we need to distinguish between
> > two kinds of rows. A user should not be bothered by this.
> >
> > a) Row comes from Table API runtime: hasFie
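The position-on-first-usage behavior shown in the quoted snippet can be sketched in Python as follows. This illustrates the described semantics only; it is not the actual Row implementation:

```python
class SketchRow:
    """Binds a field name to the next free position on its first use."""

    def __init__(self, arity):
        self.values = [None] * arity
        self.positions = {}  # field name -> bound position

    def set_field(self, name, value):
        if name not in self.positions:
            # first usage: bind the name to the next free position
            self.positions[name] = len(self.positions)
        self.values[self.positions[name]] = value

row = SketchRow(2)
row.set_field("f1", "ABC")  # "f1" bound to position 0 (first usage)
row.set_field("f0", "ABC")  # "f0" bound to position 1 (first usage)
row.set_field("f1", "XYZ")  # still position 0 (second usage of "f1")
```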
Hi Timo,
Thanks a lot for the great proposal and sorry for the late reply. This is
an important improvement for DataStream and Table API users.
I have listed my thoughts and questions below ;-)
## Conversion of DataStream to Table
1. "We limit the usage of `system_rowtime()/system_proctime()` to
Congratulations Dian!
Best,
Jark
On Thu, 27 Aug 2020 at 19:37, Leonard Xu wrote:
> Congrats, Dian! Well deserved.
>
> Best
> Leonard
>
> > On 27 Aug 2020, at 19:34, Kurt Young wrote:
> >
> > Congratulations Dian!
> >
> > Best,
> > Kurt
> >
> >
> > On Thu, Aug 27, 2020 at 7:28 PM Rui Li wrote:
> >
>
Jark Wu created FLINK-19059:
---
Summary: Support to consume retractions for OVER WINDOW operator
Key: FLINK-19059
URL: https://issues.apache.org/jira/browse/FLINK-19059
Project: Flink
Issue Type
Welcome Kartik and Muhammad! Thanks in advance for helping improve Flink
documentation.
Best,
Jark
On Thu, 27 Aug 2020 at 03:59, Till Rohrmann wrote:
> Welcome Muhammad and Kartik! Thanks a lot for helping us with improving
> Flink's documentation.
>
> Cheers,
> Till
>
> On Wed, Aug 26, 2020
Hi,
I'm wondering, if we always fall back to using SPI for temporary tables, then
how does creating a Hive temporary table using the Hive dialect work?
IMO, adding an "isTemporary" to the factory context sounds reasonable to
me, because the factory context should describe the full content of create
Thanks Leonard!
+1 to the FLIP.
Best,
Jark
On Tue, 25 Aug 2020 at 01:41, Fabian Hueske wrote:
> Leonard, Thanks for updating the FLIP!
>
> +1 to the current version.
>
> Thanks, Fabian
>
> On Mon, 24 Aug 2020 at 17:56, Leonard Xu wrote:
>
>> Hi all,
>>
>> I would like to start the
Jark Wu created FLINK-19030:
---
Summary: Support lookup source for filesystem connector
Key: FLINK-19030
URL: https://issues.apache.org/jira/browse/FLINK-19030
Project: Flink
Issue Type: New Feature
" to be "a table is
>> changing over time", the same as the "dynamic table".
>>
>
> I'd prefer not to introduce another term if it has the same meaning as
> "dynamic table".
>
> I wonder whether it's possible to consider "temporal join"
>
>>> Hi, Timo
>>>
>>> Thanks for you response.
>>>
>>> 1) Naming: Is operation time a good term for this concept? If I
>>>
>>> read
>>>
>>> "The operation time is the time when the changes happened in system
Jark Wu created FLINK-19002:
---
Summary: Support to only read changelogs of specific database and
table for canal-json format
Key: FLINK-19002
URL: https://issues.apache.org/jira/browse/FLINK-19002
Project
Jark Wu created FLINK-18938:
---
Summary: Throw better exception message for querying sink-only
connector
Key: FLINK-18938
URL: https://issues.apache.org/jira/browse/FLINK-18938
Project: Flink
Issue
Jark Wu created FLINK-18897:
---
Summary: Add documentation for the maxwell-json format
Key: FLINK-18897
URL: https://issues.apache.org/jira/browse/FLINK-18897
Project: Flink
Issue Type: Sub-task
I'm +1 to add HBase 2.x
However, I have some concerns about moving HBase 1.x to Bahir:
1) As discussed above, there are still lots of people using HBase 1.x.
2) Bahir doesn't have the infrastructure to run the existing HBase E2E
tests.
3) We also paid lots of effort to provide an uber connector
Jark Wu created FLINK-18846:
---
Summary: Set a meaningful operator name for the filesystem sink
Key: FLINK-18846
URL: https://issues.apache.org/jira/browse/FLINK-18846
Project: Flink
Issue Type
Jark Wu created FLINK-18826:
---
Summary: Support to emit and encode upsert messages to Kafka
Key: FLINK-18826
URL: https://issues.apache.org/jira/browse/FLINK-18826
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-18825:
---
Summary: Support to process CDC message in batch mode
Key: FLINK-18825
URL: https://issues.apache.org/jira/browse/FLINK-18825
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-18824:
---
Summary: Support serialization for canal-json format
Key: FLINK-18824
URL: https://issues.apache.org/jira/browse/FLINK-18824
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-18823:
---
Summary: Support serialization for debezium-json format
Key: FLINK-18823
URL: https://issues.apache.org/jira/browse/FLINK-18823
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-18822:
---
Summary: [umbrella] Improve and complete Change Data Capture
formats
Key: FLINK-18822
URL: https://issues.apache.org/jira/browse/FLINK-18822
Project: Flink
Issue
Thanks Leonard for the great FLIP. I think it is in very good shape.
+1 to start a vote.
Best,
Jark
On Fri, 31 Jul 2020 at 17:56, Fabian Hueske wrote:
> Hi Leonard,
>
> Thanks for this FLIP!
> Looks good from my side.
>
> Cheers, Fabian
>
> On Thu, 30 Jul 2020 at 22:15, Seth wrote
Jark Wu created FLINK-18784:
---
Summary: Push Projection through WatermarkAssigner
Key: FLINK-18784
URL: https://issues.apache.org/jira/browse/FLINK-18784
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-18778:
---
Summary: Support the SupportsProjectionPushDown interface for
LookupTableSource
Key: FLINK-18778
URL: https://issues.apache.org/jira/browse/FLINK-18778
Project: Flink
Jark Wu created FLINK-18779:
---
Summary: Support the SupportsFilterPushDown interface for
ScanTableSource.
Key: FLINK-18779
URL: https://issues.apache.org/jira/browse/FLINK-18779
Project: Flink
Jark Wu created FLINK-18774:
---
Summary: Support debezium-avro format
Key: FLINK-18774
URL: https://issues.apache.org/jira/browse/FLINK-18774
Project: Flink
Issue Type: New Feature
Jark Wu created FLINK-18756:
---
Summary: Support IF NOT EXISTS for CREATE TABLE statement
Key: FLINK-18756
URL: https://issues.apache.org/jira/browse/FLINK-18756
Project: Flink
Issue Type: New Feature
Hi XiaChang,
The dev mailing list is used for discussing technical designs and proposals.
Please ask user questions in u...@flink.apache.org or
user...@flink.apache.org mailing list.
Thanks,
Jark
On Wed, 29 Jul 2020 at 15:30, Wei Zhong wrote:
> Hi XiaChang,
>
> I think this is a bug. Others have
> > Best
> > Leonard
> > > On 27 Jul 2020, at 15:32, jincheng sun wrote:
> > >
> > > +1(binding)
> > >
> > > Best,
> > > Jincheng
> > >
> > >
> > > Jark Wu wrote on Mon, 27 Jul 2020 at 10:01 AM:
> > >
>
Jark Wu created FLINK-18730:
---
Summary: Remove Beta tag from SQL Client docs
Key: FLINK-18730
URL: https://issues.apache.org/jira/browse/FLINK-18730
Project: Flink
Issue Type: Task
+1 to option #1.
I think it makes sense to enhance the datagen connector.
In this case, I think we can support the default TIMESTAMP generation
strategy as "sequence" with an optional start point.
This strategy can be changed to "constant", "random", or others.
This would be really helpful and
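The proposed generation strategies could behave roughly like the sketch below. The strategy names "sequence", "constant", and "random" follow the mail, but the function and its parameters are made-up illustrations, not the datagen connector's real options:

```python
import random
from datetime import datetime, timedelta

def timestamps(strategy, start, step_ms=1000, jitter_ms=1000):
    """Yield timestamps per strategy: 'sequence' advances from a start
    point, 'constant' repeats it, 'random' jitters around it. Sketch only."""
    current = start
    while True:
        if strategy == "sequence":
            yield current
            current += timedelta(milliseconds=step_ms)
        elif strategy == "constant":
            yield start
        elif strategy == "random":
            yield start + timedelta(milliseconds=random.randint(0, jitter_ms))

gen = timestamps("sequence", datetime(2020, 1, 1))
```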
+1 (binding)
On Fri, 24 Jul 2020 at 22:22, Timo Walther wrote:
> +1
>
> Thanks for driving this Jark.
>
> Regards,
> Timo
>
> On 24.07.20 12:42, Jark Wu wrote:
> > Hi all,
> >
> > I would like to start the vote for FLIP-129 [1], which is discussed a
Jark Wu created FLINK-18705:
---
Summary: Debezium-JSON throws NPE when tombstone message is
received
Key: FLINK-18705
URL: https://issues.apache.org/jira/browse/FLINK-18705
Project: Flink
Issue
Hi all,
I would like to start the vote for FLIP-129 [1], which is discussed and
reached consensus in the discussion thread [2].
The vote will be open until 27th July (72h), unless there is an objection
or not enough votes.
Best,
Jark
[1]:
Jark Wu created FLINK-18701:
---
Summary: NOT NULL constraint is not guaranteed when aggregation
split is enabled
Key: FLINK-18701
URL: https://issues.apache.org/jira/browse/FLINK-18701
Project: Flink
ate.
>
> The latest FLIP looks good.
>
> I like Dawid’s proposal of TableDescriptor.
>
> Best
> Leonard Xu
>
> On 23 Jul 2020, at 22:56, Jark Wu wrote:
>
> Hi Timo,
>
> That's a good point I missed in the design. I have updated the FLIP and
> added a note un
>
> Best,
>
> Dawid
>
> On 23/07/2020 10:35, Timo Walther wrote:
>
> Hi Jark,
>
> thanks for the update. I think the FLIP is in really good shape now and
> ready to be voted on, if others have no further comments.
>
> I have one last comment around the methods of the
Jark Wu created FLINK-18674:
---
Summary: Support to bridge Transformation (DataStream) with
FLIP-95 interface?
Key: FLINK-18674
URL: https://issues.apache.org/jira/browse/FLINK-18674
Project: Flink
er+connectors+in+Table+API#FLIP129:RefactorDescriptorAPItoregisterconnectorsinTableAPI-FormatDescriptor
POC:
https://github.com/wuchong/flink/tree/descriptor-POC/flink-table/flink-table-common/src/main/java/org/apache/flink/table/descriptor3
Best,
Jark
On Thu, 16 Jul 2020 at 20:18, Jark Wu
Congratulations! Thanks Dian for the great work and to be the release
manager!
Best,
Jark
On Wed, 22 Jul 2020 at 15:45, Yangze Guo wrote:
> Congrats!
>
> Thanks Dian Fu for being release manager, and everyone involved!
>
> Best,
> Yangze Guo
>
> On Wed, Jul 22, 2020 at 3:14 PM Wei Zhong
Jark Wu created FLINK-18665:
---
Summary: Filesystem connector should use TableSchema excluding
computed columns
Key: FLINK-18665
URL: https://issues.apache.org/jira/browse/FLINK-18665
Project: Flink
Thanks Dawid for the link. I had a glance at the PR.
I think we can continue the thrift format based on the PR (would be better
to reach out to the author).
Best,
Jark
On Tue, 21 Jul 2020 at 15:58, Dawid Wysakowicz
wrote:
> Hi,
>
> I've just spotted this PR that might be helpful in the
Jark Wu created FLINK-18654:
---
Summary: Correct misleading documentation in "Partitioned Scan"
section of JDBC connector
Key: FLINK-18654
URL: https://issues.apache.org/jira/browse/FLINK-18654
Hi Chen,
Your listed items sound great to me. I think we can start from the thrift
format; could you open an issue for it?
The community also planned to support the PB format in the next version;
maybe we can work together.
Deriving table schema out of thrift struct is also an interesting topic,
and is
Thanks Dian for kicking off the RC.
+1 from my side:
I heavily tested CDC use cases end-to-end and it works well.
- checked/verified signatures and hashes
- manually verified the diff pom and NOTICE files between 1.11.0 and 1.11.1
to check dependencies, looks good
- no missing artifacts in