wse/FLINK-20791
>
> Jark Wu wrote on Thursday, Jan 7, 2021 at 10:25 AM:
>
>> Thanks for updating the design doc.
>> It looks good to me.
>>
>> Best,
>> Jark
>>
>> On Thu, 7 Jan 2021 at 10:16, Jingsong Li wrote:
>>
>>> Sounds good to me.
>>>
Jark Wu created FLINK-20885:
---
Summary: Exception when use 'canal-json.table.include' to filter
Canal binlog but table contains 'source' column
Key: FLINK-20885
URL: https://issues.apache.org/jira/browse/FLINK-20885
Jark Wu created FLINK-20877:
---
Summary: Refactor BytesHashMap and BytesMultiMap to support window
key
Key: FLINK-20877
URL: https://issues.apache.org/jira/browse/FLINK-20877
Project: Flink
Issue
Jark Wu created FLINK-20873:
---
Summary: Upgrade Calcite version to 1.27
Key: FLINK-20873
URL: https://issues.apache.org/jira/browse/FLINK-20873
Project: Flink
Issue Type: Improvement
>>    private final boolean distinct;
>>
>>    private final boolean approximate;
>>
>>    private final boolean ignoreNulls;
>>
>> }
>> ```
>>
>> And we really only need one GroupingSets parameter for grouping. I have
>> updated the r
>> developers should be aware of them. Wrapping them in a context implicitly is
>> error-prone: the existing connector could produce wrong results when
>> upgrading to new Flink versions.
>>
>> We can have some mechanism to check the upgrading.
>>
>> > I thin
, for the SQL: "GROUP BY GROUPING SETS (f1,
>> f2)". Then, we need to add more information to push down.
>>
>> Best,
>> Jingsong
>>
>> On Wed, Jan 6, 2021 at 11:02 AM Jark Wu wrote:
>>
>>> I think this may be over-designed. We should hav
> > > memory management model to be unified and give the operator free space.
> > >
> > > Xingtong's proposal looks good to me. +1 to split `DATAPROC` into
> > > `STATE_BACKEND` or `OPERATOR`.
> > >
> > > Best,
> > > Jingsong
> > >
Jark Wu created FLINK-20860:
---
Summary: Allow streaming operators to use managed memory
Key: FLINK-20860
URL: https://issues.apache.org/jira/browse/FLINK-20860
Project: Flink
Issue Type: Sub-task
to just return a boolean type to indicate whether the push-down of all
>>> > aggregates was successful or not. [Resolved in proposal]
>>> >
>>> > For (2) Agree: The aggOutputDataType represent the produced data type
>>> of
>>> > the new table s
Jark Wu created FLINK-20854:
---
Summary: Introduce BytesMultiMap to support buffering records
Key: FLINK-20854
URL: https://issues.apache.org/jira/browse/FLINK-20854
Project: Flink
Issue Type: Sub
have changed this parameter
>> to be producedDataType. [Resolved in proposal]
>>
>> For (3) Agree: Indeed, groupSet may mislead users, I have changed to use
>> groupingFields. [Resolved in proposal]
>>
>> Thx again for the suggestion, looking for the
system will
> throw an exception that the type does not match. We can indeed cast by getting
> the schema, but I think if CatalogPartitionSpec#partitionSpec is of type
> Map, there is no need to do the cast operation, and the
> universality and compatibility are better
>
> Jark Wu wrote on Tuesday, Jan 5, 2021 at 1:47 PM
Hi Jun,
I'm curious why it doesn't work when represented as a string.
You can get the field type from the CatalogTable#getSchema(),
then parse/cast the partition value to the type you want.
Best,
Jark
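The suggestion above (look up the field's type in the table schema, then parse the string partition value into that type) could be sketched roughly as follows. This is plain Java for illustration only: the class name and the set of handled type names are assumptions, not Flink API.

```java
import java.time.LocalDate;

// Illustrative sketch: CatalogPartitionSpec stores partition values as
// Map<String, String>, so a consumer can look up each field's logical type
// in the table schema and parse the string accordingly. The class name and
// the handled type names are assumptions for illustration only.
public class PartitionValueParser {

    /** Parse a string partition value into a Java value for the given type name. */
    static Object parse(String value, String logicalTypeName) {
        switch (logicalTypeName) {
            case "INT":    return Integer.parseInt(value);
            case "BIGINT": return Long.parseLong(value);
            case "DOUBLE": return Double.parseDouble(value);
            case "DATE":   return LocalDate.parse(value); // ISO format, e.g. "2021-01-05"
            default:       return value; // fall back to the raw string
        }
    }

    public static void main(String[] args) {
        System.out.println(parse("42", "INT"));          // 42
        System.out.println(parse("2021-01-05", "DATE")); // 2021-01-05
    }
}
```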
On Tue, 5 Jan 2021 at 13:43, Jun Zhang wrote:
> Hello dev:
> Now I encounter a problem
perators
> * `STATE_BACKEND` for state backends
> * `PYTHON` for python processes
> * `DATAPROC` as a legacy key for state backend or batch operators if
> `STATE_BACKEND` or `OPERATOR` are not specified.
>
> Thank you~
>
> Xintong Song
>
>
>
> On Tue, Jan 5, 2021 at 11:
>>> > For (2): Agree: Change to use CallExpression is a better choice, and
>>> have
>>> > resolved this
>>> > comment in the proposal.
>>> >
>>> > For (3): I suggest we first support the JDBC connector, as we don't
>>> have
>
Thanks Dian,
+1 to `deduplicate`.
Regarding `myTable.coalesce($("a"), 1).as("a")`, I'm afraid it may
conflict with or be confused with the built-in expression `coalesce(f0, 0)`
(we may introduce it in the future).
Besides that, could we also align other features of Flink SQL, e.g.
event-time/processing-time
allow streaming operators to use managed memory for
> other use cases.
>
> Do you think we need an additional "consumer" setting or that they would
> just use `DATAPROC` and decide by themselves what to use the memory for?
>
> Best,
> Aljoscha
>
> On 2020/12/22 17:14,
Jark Wu created FLINK-20823:
---
Summary: Update documentation to mention Table/SQL API doesn't
provide cross-major-version state compatibility
Key: FLINK-20823
URL: https://issues.apache.org/jira/browse/FLINK-20823
at 08:14, Guenther Starnberger
wrote:
> On Wed, Dec 30, 2020 at 11:21 AM Jark Wu wrote:
>
> Jark,
>
> > If you are using the old planner in 1.9, and using the old planner in
> 1.11,
> > then a state migration is
> > needed because of the newly added RowKind field. T
Hi Guenther,
If you are using the old planner in 1.9, and using the old planner in 1.11,
then a state migration is
needed because of the newly added RowKind field. This is documented in the
1.11 release note [1].
If you are using the old planner in 1.9, and using the blink planner in
1.11, the
ote:
> >>
> >>> Thanks a lot for this effort Aljoscha and Chesnay! Finally we have a
> >> common
> >>> code style :-)
> >>>
> >>> Cheers,
> >>> Till
> >>>
> >>> On Tue, Dec 29, 2020 at 7:
Hi Sebastian,
Thanks for the proposal. I think this is a great improvement for Flink SQL.
I went through the design doc and have the following thoughts:
1) Flink has deprecated the legacy TableSource in 1.11 and proposed a new
set of DynamicTableSource interfaces. Could you update your proposal to
Thanks Aljoscha and Chesnay for the great work!
Best,
Jark
On Tue, 29 Dec 2020 at 11:11, Xintong Song wrote:
> Great! Thanks Aljoscha and Chesnay for driving this.
>
> Thank you~
>
> Xintong Song
>
>
>
> On Tue, Dec 29, 2020 at 1:28 AM Chesnay Schepler
> wrote:
>
> > Hello everyone,
> >
> > I
Hi all,
I found that currently the managed memory can only be used in 3 workloads
[1]:
- state backends for streaming jobs
- sorting, hash tables for batch jobs
- python UDFs
And the configuration option `taskmanager.memory.managed.consumer-weights`
only allows values: PYTHON and DATAPROC (state
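For reference, the weights for that option are configured as comma-separated `CONSUMER:weight` pairs; the values below are only an illustration (70/30 was the documented default split between the two consumers at the time):

```yaml
# flink-conf.yaml -- illustrative weights only
taskmanager.memory.managed.consumer-weights: DATAPROC:70,PYTHON:30
```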
Jark Wu created FLINK-20562:
---
Summary: Support ExplainDetails for EXPLAIN syntax
Key: FLINK-20562
URL: https://issues.apache.org/jira/browse/FLINK-20562
Project: Flink
Issue Type: New Feature
Jark Wu created FLINK-20557:
---
Summary: Support statement set in SQL CLI
Key: FLINK-20557
URL: https://issues.apache.org/jira/browse/FLINK-20557
Project: Flink
Issue Type: New Feature
+1
This is a very good idea.
Best,
Jark
On Wed, 9 Dec 2020 at 10:43, Xingbo Huang wrote:
> +1
>
> This is a very good proposal. In release-1.12, many newly added features are
> only supported on the blink planner. For example, the newly added features
> of PyFlink in FLIP-137[1] and FLIP-139[2]
Jark Wu created FLINK-20470:
---
Summary: MissingNode can't be casted to ObjectNode when
deserializing JSON
Key: FLINK-20470
URL: https://issues.apache.org/jira/browse/FLINK-20470
Project: Flink
Jark Wu created FLINK-20460:
---
Summary: Support async lookup for HBase connector
Key: FLINK-20460
URL: https://issues.apache.org/jira/browse/FLINK-20460
Project: Flink
Issue Type: New Feature
Jark Wu created FLINK-20454:
---
Summary: Allow to read metadata for debezium-avro-confluent format
Key: FLINK-20454
URL: https://issues.apache.org/jira/browse/FLINK-20454
Project: Flink
Issue Type
Jark Wu created FLINK-20421:
---
Summary: Support canal-protobuf format
Key: FLINK-20421
URL: https://issues.apache.org/jira/browse/FLINK-20421
Project: Flink
Issue Type: Sub-task
> >
> > > > > +1
> > > > >
> > > > > And I want to second Arvid's mention of testcontainers [1].
> > > > >
> > > > > [1] https://www.testcontainers.org/
> > > > >
> > > > > On 18.11.20 10:43
Jark Wu created FLINK-20403:
---
Summary: Migrate test_table_shaded_dependencies.sh
Key: FLINK-20403
URL: https://issues.apache.org/jira/browse/FLINK-20403
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-20401:
---
Summary: Migrate test_tpcds.sh
Key: FLINK-20401
URL: https://issues.apache.org/jira/browse/FLINK-20401
Project: Flink
Issue Type: Sub-task
Components: Table
Jark Wu created FLINK-20402:
---
Summary: Migrate test_tpch.sh
Key: FLINK-20402
URL: https://issues.apache.org/jira/browse/FLINK-20402
Project: Flink
Issue Type: Sub-task
Components: Table
Jark Wu created FLINK-20400:
---
Summary: Migrate test_streaming_sql.sh
Key: FLINK-20400
URL: https://issues.apache.org/jira/browse/FLINK-20400
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-20399:
---
Summary: Migrate test_sql_client.sh
Key: FLINK-20399
URL: https://issues.apache.org/jira/browse/FLINK-20399
Project: Flink
Issue Type: Sub-task
Components
Jark Wu created FLINK-20398:
---
Summary: Migrate test_batch_sql.sh
Key: FLINK-20398
URL: https://issues.apache.org/jira/browse/FLINK-20398
Project: Flink
Issue Type: Sub-task
Components
Jark Wu created FLINK-20387:
---
Summary: Support column of TIMESTAMP WITH LOCAL ZONE TIME type as
rowtime attribute
Key: FLINK-20387
URL: https://issues.apache.org/jira/browse/FLINK-20387
Project: Flink
Jark Wu created FLINK-20386:
---
Summary: ClassCastException when lookup join a JDBC table on INT
UNSIGNED column
Key: FLINK-20386
URL: https://issues.apache.org/jira/browse/FLINK-20386
Project: Flink
Jark Wu created FLINK-20374:
---
Summary: Wrong result when shuffling changelog stream on
non-primary-key columns
Key: FLINK-20374
URL: https://issues.apache.org/jira/browse/FLINK-20374
Project: Flink
Jark Wu created FLINK-20370:
---
Summary: Result is wrong when sink primary key is not the same
with query
Key: FLINK-20370
URL: https://issues.apache.org/jira/browse/FLINK-20370
Project: Flink
Jark Wu created FLINK-20369:
---
Summary: Improve the digest of TableSourceScan and Sink node
Key: FLINK-20369
URL: https://issues.apache.org/jira/browse/FLINK-20369
Project: Flink
Issue Type
Hi Yuan,
Thanks for contributing to Flink. I have helped to merge this PR.
For pull requests without a JIRA id, it would be better to ping/request a
review from the committers in the PR (there is a suggested-reviewers list in
the right sidebar), because such pull requests usually can't be notified to
Jark Wu created FLINK-20348:
---
Summary: Make "schema-registry.subject" optional for Kafka sink
with avro-confluent format
Key: FLINK-20348
URL: https://issues.apache.org/jira/browse/FLINK-20348
e it makes sense to extend the
> build_properties.sh check to test if the change includes any docs changes.
> If not, we could skip the check to save some resources.
>
> Do you want to open a PR for this?
>
> On Tue, Nov 24, 2020 at 10:03 AM Jark Wu wrote:
>
> > Is it p
Jark Wu created FLINK-20325:
---
Summary: Move docs_404_check to CI stage
Key: FLINK-20325
URL: https://issues.apache.org/jira/browse/FLINK-20325
Project: Flink
Issue Type: Test
Components
Jark Wu created FLINK-20320:
---
Summary: Support start SQL Client with an initialization SQL file
Key: FLINK-20320
URL: https://issues.apache.org/jira/browse/FLINK-20320
Project: Flink
Issue Type
Jark Wu created FLINK-20317:
---
Summary: Update Format Overview page to mention the supported
connector for upsert-kafka
Key: FLINK-20317
URL: https://issues.apache.org/jira/browse/FLINK-20317
Project
> > Automating the check is a good idea since everything which is not
> > automated will be forgotten at some point.
> >
> > Cheers,
> > Till
> >
> > On Wed, Nov 18, 2020 at 12:56 PM Jark Wu wrote:
> >
> > > +1 for this. Would be better to run the `check_link
Jark Wu created FLINK-20313:
---
Summary: Test the debezium-avro-confluent format
Key: FLINK-20313
URL: https://issues.apache.org/jira/browse/FLINK-20313
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-20309:
---
Summary: UnalignedCheckpointTestBase.execute is failed
Key: FLINK-20309
URL: https://issues.apache.org/jira/browse/FLINK-20309
Project: Flink
Issue Type: Bug
Jark Wu created FLINK-20289:
---
Summary: Computed columns can be calculated after
ChangelogNormalize to reduce shuffle
Key: FLINK-20289
URL: https://issues.apache.org/jira/browse/FLINK-20289
Project: Flink
Jark Wu created FLINK-20286:
---
Summary: Support streaming source for filesystem SQL connector
Key: FLINK-20286
URL: https://issues.apache.org/jira/browse/FLINK-20286
Project: Flink
Issue Type: New
Jark Wu created FLINK-20281:
---
Summary: Window aggregation supports changelog stream input
Key: FLINK-20281
URL: https://issues.apache.org/jira/browse/FLINK-20281
Project: Flink
Issue Type: New
+1 for this. Would be better to run the `check_links.sh` for broken links.
Btw, could we add the docs build and check into PR CI.
I think it would be better to guarantee this in the process.
Best,
Jark
On Wed, 18 Nov 2020 at 18:08, Till Rohrmann wrote:
> Hi everyone,
>
> I noticed in the last
+1 to use the Java-based testing framework and +1 for using docker images
in the future.
IIUC, the Java-based testing framework refers to the
`flink-end-to-end-tests-common` module.
The Java-based framework helped us a lot when debugging the unstable e2e
tests.
Best,
Jark
On Wed, 18 Nov 2020 at
Jark Wu created FLINK-20205:
---
Summary: CDC source shouldn't send UPDATE_BEFORE messages if the
downstream doesn't need it
Key: FLINK-20205
URL: https://issues.apache.org/jira/browse/FLINK-20205
Project
Hi all,
The voting time for FLIP-145 has passed. I'm closing the vote now.
There were 8 +1 votes, 4 of which are binding:
- Pengcheng Liu
- Jark Wu (binding)
- Timo (binding)
- Dalong Liu
- Hailong Wang
- Jingsong Li (binding)
- Danny Chen
- Benchao Li (binding)
There were no disapproving votes
Jark Wu created FLINK-20162:
---
Summary: Fix time zone problems of some time related functions
Key: FLINK-20162
URL: https://issues.apache.org/jira/browse/FLINK-20162
Project: Flink
Issue Type: Bug
Jark Wu created FLINK-20150:
---
Summary: Add documentation for the debezium-avro format
Key: FLINK-20150
URL: https://issues.apache.org/jira/browse/FLINK-20150
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-20102:
---
Summary: Update HBase connector documentation for HBase 2.x
supporting
Key: FLINK-20102
URL: https://issues.apache.org/jira/browse/FLINK-20102
Project: Flink
Issue
Jark Wu created FLINK-20101:
---
Summary: Fix the wrong documentation of FROM_UNIXTIME function
Key: FLINK-20101
URL: https://issues.apache.org/jira/browse/FLINK-20101
Project: Flink
Issue Type: Bug
+1 (binding)
On Tue, 10 Nov 2020 at 14:59, Jark Wu wrote:
> Hi all,
>
> There is new feedback on the FLIP-145. So I would like to start a new vote
> for FLIP-145 [1],
> which has been discussed and reached consensus in the discussion thread
> [2].
>
> The vote will be
Hi all,
There is new feedback on the FLIP-145. So I would like to start a new vote
for FLIP-145 [1],
which has been discussed and reached consensus in the discussion thread [2].
The vote will be open until 15:00 (UTC+8) 13th Nov. (72h), unless there is
an objection or not enough votes.
Best,
Hi all,
After some offline discussion and investigation with Timo and Danny, I have
updated the FLIP-145.
FLIP-145:
https://cwiki.apache.org/confluence/display/FLINK/FLIP-145%3A+Support+SQL+windowing+table-valued+function
Here are the updates:
1. Add SESSION window syntax and examples.
2. Time
Jark Wu created FLINK-20039:
---
Summary: Streaming File Sink end-to-end test is unstable on Azure
Key: FLINK-20039
URL: https://issues.apache.org/jira/browse/FLINK-20039
Project: Flink
Issue Type
Jark Wu created FLINK-19996:
---
Summary: Add end-to-end IT case for Debezium + Kafka + temporal
join
Key: FLINK-19996
URL: https://issues.apache.org/jira/browse/FLINK-19996
Project: Flink
Issue
Jark Wu created FLINK-19948:
---
Summary: CAST(now() as bigint) throws compile exception
Key: FLINK-19948
URL: https://issues.apache.org/jira/browse/FLINK-19948
Project: Flink
Issue Type: Bug
Sorry for the late review. The community is busy preparing for the 1.12
release.
I guess that is why this PR was missed in review.
I will help to review this PR later.
Thanks for the contribution!
Best,
Jark
On Tue, 3 Nov 2020 at 10:36, JiaTao Tao wrote:
> Here I fix a potential
Jark Wu created FLINK-19889:
---
Summary: Supports nested projection pushdown for filesystem
connector of columnar format
Key: FLINK-19889
URL: https://issues.apache.org/jira/browse/FLINK-19889
Project: Flink
Jark Wu created FLINK-19873:
---
Summary: Skip DDL change events for Canal data
Key: FLINK-19873
URL: https://issues.apache.org/jira/browse/FLINK-19873
Project: Flink
Issue Type: Sub-task
Congrats Congxian!
Best,
Jark
On Thu, 29 Oct 2020 at 14:28, Yu Li wrote:
> Hi all,
>
> On behalf of the PMC, I’m very happy to announce Congxian Qiu as a new
> Flink committer.
>
> Congxian has been an active contributor for more than two years, with 226
> contributions including 76 commits
Jark Wu created FLINK-19859:
---
Summary: Add documentation for the upsert-kafka connector
Key: FLINK-19859
URL: https://issues.apache.org/jira/browse/FLINK-19859
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-19858:
---
Summary: Add the new table factory for upsert-kafka connector
Key: FLINK-19858
URL: https://issues.apache.org/jira/browse/FLINK-19858
Project: Flink
Issue Type: Sub
Jark Wu created FLINK-19857:
---
Summary: FLIP-149: Introduce the upsert-kafka Connector
Key: FLINK-19857
URL: https://issues.apache.org/jira/browse/FLINK-19857
Project: Flink
Issue Type: New Feature
Jark Wu created FLINK-19824:
---
Summary: Refactor and merge SupportsComputedColumnPushDown and
SupportsWatermarkPushDown interfaces
Key: FLINK-19824
URL: https://issues.apache.org/jira/browse/FLINK-19824
Jark Wu created FLINK-19823:
---
Summary: Integrate Filesystem and Hive connector with changelog
format (e.g. debezium-json)
Key: FLINK-19823
URL: https://issues.apache.org/jira/browse/FLINK-19823
Project
+1
On Fri, 23 Oct 2020 at 15:25, Shengkai Fang wrote:
> Hi, all,
>
> I would like to start the vote for FLIP-149[1], which is discussed and
> reached a consensus in the discussion thread[2]. The vote will be open
> until 16:00(UTC+8) 28th Oct. (72h, exclude weekends), unless there is an
>
wrote on Friday, Oct 23, 2020 at 2:25 PM:
> >
> >> Thanks for explanation,
> >>
> >> I am OK for `upsert`. Yes, Its concept has been accepted by many
> systems.
> >>
> >> Best,
> >> Jingsong
> >>
> >> On Fri, Oct 23, 2020 at 12:38 PM Jark Wu wr
+1
Thanks for the work.
Best,
Jark
On Fri, 23 Oct 2020 at 10:13, Xintong Song wrote:
> Thanks Yadong, Mattias and Lining for reviving this FLIP.
>
> I've seen so many users confused by the current webui page of task manager
> metrics. This FLIP should definitely help them understand the
> > align the name to other available Flink connectors [1]:
> >
> > `connector=kafka-cdc`.
> >
> > Regards,
> > Timo
> >
> > [1] https://github.com/ververica/flink-cdc-connectors
> >
> > On 22.10.20 17:17, Jark Wu wrote:
> > > Anoth
ience, especially
> that
> such an update mode option will implicitly make half of the current Kafka
> options invalid or meaningless.
>
> Best,
> Kurt
>
>
> On Thu, Oct 22, 2020 at 10:31 PM Jark Wu wrote:
>
> > Hi Timo, Seth,
> >
> > The def
n"
> >
> > But sometimes users like to store a lineage of changes in their topics.
> > Independent of any ktable/kstream interpretation.
> >
> > I let the majority decide on this topic to not further block this
> > effort. But we might find a better name like
out the connector
>end-offset = -- Some information about the boundedness
>model = table/stream -- Some information about interpretation
> )
>
>
> We can still apply all the constraints mentioned in the FLIP. When
> `model` is set to `table`.
>
> What do you th
> > > >> for many users who use Kafka and Flink SQL together. A few questions
> > and
> > > >> thoughts:
> > > >>
> > > >> * Is your example "Use KTable as a reference/dimension table"
> correct?
> > > It
> > &
Jark Wu created FLINK-19725:
---
Summary: DataStreamTests.test_key_by_on_connect_stream is failed
on Azure
Key: FLINK-19725
URL: https://issues.apache.org/jira/browse/FLINK-19725
Project: Flink
Jark Wu created FLINK-19718:
---
Summary: HiveTableSourceITCase.testStreamPartitionRead is not
stable on Azure
Key: FLINK-19718
URL: https://issues.apache.org/jira/browse/FLINK-19718
Project: Flink
questions in the mailing list asking how to model
a KTable and how to join a KTable in Flink SQL.
Best,
Jark
On Mon, 19 Oct 2020 at 19:53, Jark Wu wrote:
> Hi Jingsong,
>
> As the FLIP describes, "KTable connector produces a changelog stream,
> where each data record represents an
Hi Jingsong,
As the FLIP describes, "KTable connector produces a changelog stream, where
each data record represents an update or delete event.".
Therefore, a ktable source is an unbounded stream source. Selecting a
ktable source is similar to selecting a kafka source with debezium-json
format
+1
On Fri, 16 Oct 2020 at 10:27, admin <17626017...@163.com> wrote:
> +1
>
> > On Oct 16, 2020 at 10:05 AM, Danny Chan wrote:
> >
> > +1, nice job !
> >
> > Best,
> > Danny Chan
> > On Oct 15, 2020 at 8:08 PM (+0800), Jingsong Li wrote:
> >> Hi all,
> >>
> >> I would like to start the vote for FLIP-146 [1], which is
=> TABLE InputTable
> timecol => DESCRIPTOR(timestamp)
> gap => INTERVAL '5' MINUTES
>)
> )
> GROUP BY userId, window_start, window_end;
>
> SELECT *
> FROM TABLE(
>Session (
> data =>
> TABLE InputTable
> PARTITION BY us
Hi,
Thanks for bringing this discussion.
I think limiting the key type to Long can't resolve the comparison problem,
because the byte order and the value order of negative numbers differ.
Unless, we limit the key type to positive Long. But how to check this
before submitting a job?
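The point about negative numbers can be demonstrated directly. This is a plain-Java sketch (not Flink code): it compares big-endian byte encodings of longs lexicographically, the way a byte-sorted key store would.

```java
import java.nio.ByteBuffer;

// Plain-Java sketch (not Flink code): the unsigned lexicographic order of
// big-endian byte encodings disagrees with the numeric order of signed longs,
// because negative values start with a set sign bit (0xFF... leading bytes).
public class BytesOrder {

    static byte[] toBytes(long v) {
        return ByteBuffer.allocate(Long.BYTES).putLong(v).array();
    }

    // Unsigned lexicographic comparison, as a byte-sorted key store would do.
    static int compareBytes(byte[] a, byte[] b) {
        for (int i = 0; i < a.length; i++) {
            int c = Integer.compare(a[i] & 0xFF, b[i] & 0xFF);
            if (c != 0) return c;
        }
        return 0;
    }

    public static void main(String[] args) {
        // Numerically -1 < 1, but the byte encoding of -1 sorts AFTER that of 1:
        System.out.println(compareBytes(toBytes(-1L), toBytes(1L)) > 0); // true
        // Flipping the sign bit restores numeric order for signed longs:
        System.out.println(
            compareBytes(toBytes(-1L ^ Long.MIN_VALUE), toBytes(1L ^ Long.MIN_VALUE)) < 0); // true
    }
}
```

So even a `Long`-only key type keeps byte order and value order out of sync; some sign-bit transformation (or the positive-only restriction questioned above) would still be needed.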
In Blink
> 2016 standard ranging from `CSVreader` to `UDjoin` are impressive.
>
> Regards,
> Timo
>
> [1]
>
> https://www.researchgate.net/profile/Fred_Zemke/publication/329593276_The_new_and_improved_SQL2016_standard/links/5c17eb50a6fdcc494ffc5999/The-new-and-improved-SQL2016-standard.pdf
>
6AD-F696B5D36D56
[3]: https://oracle-base.com/articles/18c/polymorphic-table-functions-18c
[4]:
https://docs.oracle.com/en/database/oracle/oracle-database/18/lnpls/plsql-optimization-and-tuning.html#GUID-695FBA1A-89EA-45B4-9C81-CA99F6C794A5
On Tue, 13 Oct 2020 at 18:42, Jark Wu wrote:
> Hi everyone,
&g
standards.iso.org/ittf/PubliclyAvailableStandards/c069776_ISO_IEC_TR_19075-7_2017.zip
[2]:
https://lists.apache.org/x/thread.html/4a91632b1c780ef9d67311f90fce626582faae7d30a134a768c3d324@%3Cdev.calcite.apache.org%3E
On Sat, 10 Oct 2020 at 17:59, Jark Wu wrote:
> Hi everyone,
>
> Thanks everyone
3d6%40%3Cdev.flink.apache.org%3E
>
> [2]
>
> https://lists.apache.org/thread.html/rb1dc7565fdde83063d663e3ff0bbec5e2dbd521247b4dcd28174127f%40%3Cdev.flink.apache.org%3E
>
> [3]
>
> https://www.doag.org/formes/pubfiles/11270472/2019-SQL-Andrej_Pashchenko-Polymorphic_Table_Functions_in_18c_Einfuehrung_un
Jark Wu created FLINK-19612:
---
Summary: Support CUMULATE window table function in planner
Key: FLINK-19612
URL: https://issues.apache.org/jira/browse/FLINK-19612
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-19611:
---
Summary: Introduce WindowProperties MetadataHandler to propagate
window properties
Key: FLINK-19611
URL: https://issues.apache.org/jira/browse/FLINK-19611
Project: Flink