Regarding the implementation, did you consider whether it is compatible with
the pushdown abilities, e.g., projection pushdown, filter pushdown, and
partition pushdown? Since `Snapshot` is not handled much in existing rules, I
have a concern about this. Of course, it depends on your implementation
details, what is
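The pushdown concern can be illustrated with a minimal sketch (the `orders`
table, its columns, and the timestamp literal are hypothetical, not from the
FLIP):

```sql
-- Time-travel query: ideally the planner still pushes the projection
-- (only `id`) and the filter (`region = 'EU'`) through the Snapshot
-- node down into the source scan, rather than reading the full table.
SELECT id
FROM orders FOR SYSTEM_TIME AS OF TIMESTAMP '2023-05-01 00:00:00'
WHERE region = 'EU';
```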
Junrui Li created FLINK-32199:
-
Summary: MetricStore does not remove metrics of nonexistent
parallelism in TaskMetricStore when scaling down job parallelism
Key: FLINK-32199
URL:
Panagiotis Garefalakis created FLINK-32198:
--
Summary: Enforce single maxExceptions query parameter
Key: FLINK-32198
URL: https://issues.apache.org/jira/browse/FLINK-32198
Project: Flink
Mason Chen created FLINK-32197:
--
Summary: FLIP 246: Multi Cluster Kafka Source
Key: FLINK-32197
URL: https://issues.apache.org/jira/browse/FLINK-32197
Project: Flink
Issue Type: New Feature
Sharon Xie created FLINK-32196:
--
Summary: KafkaWriter recovery doesn't abort lingering transactions
under EO semantic
Key: FLINK-32196
URL: https://issues.apache.org/jira/browse/FLINK-32196
Project:
Elkhan Dadashov created FLINK-32195:
---
Summary: Add SQL Gateway custom headers support
Key: FLINK-32195
URL: https://issues.apache.org/jira/browse/FLINK-32195
Project: Flink
Issue Type: New
I'm happy to announce that we have unanimously approved this release.
There are 6 approving votes, 3 of which are binding:
* Etienne Chauchot
* Khanh Vu
* Martijn Visser (binding)
* Ryan Skraba
* Danny Cranmer (binding)
* Leonard Xu (binding)
There are no disapproving votes.
Thanks everyone!
This vote is now closed, I will announce the results in a separate thread.
Danny
On Thu, May 25, 2023 at 5:40 PM Leonard Xu wrote:
> +1 (binding)
>
> - built from source code succeeded
> - verified signatures
> - verified hashsums
> - checked Github release tag
> - checked release notes
> -
+1 (binding)
- built from source code succeeded
- verified signatures
- verified hashsums
- checked Github release tag
- checked release notes
- checked the contents contains jar and pom files in apache repo
- reviewed the web PR
Best,
Leonard
> On May 24, 2023, at 4:38 PM, Danny Cranmer
Hey all,
Please review and vote on release candidate #1 for version 3.0.1 of the
Apache Flink Pulsar Connector, as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)
The complete staging area is available for your review, which
Hi Dong, Hi Piotr,
Thanks for the clarification.
@Dong
According to the code examples in the FLIP, I thought we were focusing on
the HybridSource scenario. With the current HybridSource implementation, we
don't even need to know the boundedness of the sources in the HybridSource,
since all sources
Hi all,
This discussion thread is to gauge community opinion and gather feedback on
implementing a better exception hierarchy in Flink to identify exceptions that
come from running “User job code” and exceptions coming from “Flink engine
code”.
Problem:
Flink provides a distributed processing
Yuxin Tan created FLINK-32194:
-
Summary: Elasticsearch connector should remove the dependency on
flink-shaded
Key: FLINK-32194
URL: https://issues.apache.org/jira/browse/FLINK-32194
Project: Flink
Thanks for your reply
@Timo @BenChao @yuxia
Sorry for the mistake. Currently, Calcite only supports the `FOR SYSTEM_TIME
AS OF` syntax, so we can only support `FOR SYSTEM_TIME AS OF`. I've
updated the syntax part of the FLIP.
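For reference, the supported syntax would look like this minimal sketch (the
table name and timestamp literal are made up for illustration):

```sql
-- Query the table as it appeared at a past point in time.
SELECT *
FROM orders FOR SYSTEM_TIME AS OF TIMESTAMP '2023-04-27 00:00:00';
```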
@Timo
> We will convert it to TIMESTAMP_LTZ?
Yes, I think we need
Yuxin Tan created FLINK-32193:
-
Summary: AWS connector removes the dependency on flink-shaded
Key: FLINK-32193
URL: https://issues.apache.org/jira/browse/FLINK-32193
Project: Flink
Issue Type:
Hi Piotr,
Thanks for the discussion. Please see my comments inline.
On Thu, May 25, 2023 at 6:34 PM Piotr Nowojski wrote:
> Hi all,
>
> Thanks for the discussion.
>
> @Dong
>
> > In the target use-case, we would like the HybridSource to trigger
> > checkpoints more frequently when it is reading the
Sergey Nuyanzin created FLINK-32192:
---
Summary: JsonBatchFileSystemITCase fail due to Process Exit Code:
239 (because of NoClassDefFoundError
akka.actor.dungeon.FaultHandling$$anonfun$handleNonFatalOrInterruptedException$1)
dizhou cao created FLINK-32191:
--
Summary: Support for configuring keepalive related parameters.
Key: FLINK-32191
URL: https://issues.apache.org/jira/browse/FLINK-32191
Project: Flink
Issue
Claude Warren created FLINK-32190:
-
Summary: Bad link in Flink page
Key: FLINK-32190
URL: https://issues.apache.org/jira/browse/FLINK-32190
Project: Flink
Issue Type: Bug
Thanks Feng for bringing this up. It'll be great to introduce time travel in
Flink to have better integration with external data sources.
I also share the same concern about the syntax.
I see in the part of `Whether to support other syntax implementations` in this
FLIP, seems the syntax in
Gentlemen,
I have a problem with some apache-flink modules. I am running Apache Flink
1.17.0 and writing test code in Colab. I faced a problem importing
modules:
from pyflink.table import DataTypes
from pyflink.table.descriptors import Schema, Kafka, Json, Rowtime
from
Hi Weijie,
Thanks again for driving it. I was wondering if you are able to share the
estimated date when the 1.16.2 and 1.17.1 releases will be officially
announced after the voting is closed? Thanks!
Best regards,
Jing
On Thu, May 25, 2023 at 9:46 AM weijie guo
wrote:
> I'm happy to announce
Thanks Feng, it's exciting to have this ability.
Regarding the syntax section, are you proposing `AS OF` instead of `FOR
SYSTEM_TIME AS OF` to do this? I know `FOR SYSTEM_TIME AS OF` is in the SQL
standard and has been supported by some database vendors such as SQL Server.
About `AS OF`, is it in the
Also: how do we want to query the most recent version of a table?
`AS OF CURRENT_TIMESTAMP` would be ideal, but according to the docs the type
is TIMESTAMP_LTZ, and what is even more concerning is that it is actually
evaluated per row:
> Returns the current SQL timestamp in the local
Sergey Nuyanzin created FLINK-32189:
---
Summary: Integration tests fail due to Process Exit Code: 239 and
NoClassDefFound in logs
Key: FLINK-32189
URL: https://issues.apache.org/jira/browse/FLINK-32189
Hi all,
Thanks for the discussion.
@Dong
> In the target use-case, we would like the HybridSource to trigger
> checkpoints more frequently when it is reading the Kafka source (than when
> it is reading the HDFS source). We would need to set a flag for the
> checkpoint trigger to know which source the
Hi Feng,
thanks for proposing this FLIP. It makes a lot of sense to finally support
querying tables at a specific point in time, and hopefully also ranges soon,
following time-versioned tables.
Here is some feedback from my side:
1. Syntax
Can you elaborate a bit on the Calcite restrictions?
Hi, everyone.
I’d like to start a discussion about FLIP-308: Support Time Travel In Batch
Mode [1]
Time travel is a SQL syntax used to query historical versions of data. It
allows users to specify a point in time and retrieve the data and schema of
a table as it appeared at that time. With time
Xin Chen created FLINK-32188:
Summary: Does the custom connector not support pushing down
"where" query predicates to query fields of array type?
Key: FLINK-32188
URL:
Sergey Nuyanzin created FLINK-32187:
---
Summary: Remove dependency on flink-shaded
Key: FLINK-32187
URL: https://issues.apache.org/jira/browse/FLINK-32187
Project: Flink
Issue Type:
Yu Chen created FLINK-32186:
---
Summary: Support subtask stack auto-search when redirecting from
subtask backpressure tab
Key: FLINK-32186
URL: https://issues.apache.org/jira/browse/FLINK-32186
Project:
I'm happy to announce that we have unanimously approved this release.
There are 7 approving votes, 3 of which are binding:
* Xintong Song (binding)
* Yuxin Tan
* Xingbo Huang (binding)
* Yun Tang
* Jing Ge
* Qingsheng Ren (binding)
* Benchao Li
There are no disapproving votes.
I'll