+1 (non-binding)
- Checked signature
- Checked the hash
- Checked tag
- Built from source
Best,
Yuxin
weijie guo wrote on Mon, May 29, 2023 at 14:14:
> +1 (non-binding)
>
> - checked signature and checksum
> - checked tag in github repository
> - compiled from source
> - checked the web PR
>
> BTW, please remember to update docs/jdbc.yaml for the v3.1 branch after
> the release is completed.
+1, the fallback was just intended as a temporary workaround to run
catalog/module related statements with hive dialect.
On Mon, May 29, 2023 at 3:59 PM Benchao Li wrote:
> Big +1 on this, thanks yuxia for driving this!
>
> yuxia wrote on Mon, May 29, 2023 at 14:55:
>
> > Hi, community.
> >
> > I want to start the discussion about Hive dialect shouldn't fall back to
> > Flink's default dialect.
Thanks Ron for your information.
I suggest that it can be written in the Motivation of FLIP.
Best,
Jingsong
On Tue, May 30, 2023 at 9:57 AM liu ron wrote:
>
> Hi, Jingsong
>
> Thanks for your review. We have tested it with the TPC-DS benchmark, and got a 12%
> gain overall when supporting only the Calc operator
Leijurv created FLINK-32215:
---
Summary: Date format in flink-statefun startupPosition
documentation is incorrect
Key: FLINK-32215
URL: https://issues.apache.org/jira/browse/FLINK-32215
Project: Flink
Xin Hao created FLINK-32214:
---
Summary: Fetch more info in the FlinkDeployment status by using
the overview API
Key: FLINK-32214
URL: https://issues.apache.org/jira/browse/FLINK-32214
Project: Flink
Let me correct the typo in the last paragraph as below:
To make the problem even harder, the incoming traffic can be spiky. And the
overhead of triggering checkpointing can be relatively low, in which case
it might be more performant (w.r.t. e2e lag) for the Flink job to
checkpoint at the more
Hi, Jingsong
Thanks for your review. We have tested it with the TPC-DS benchmark, and got a 12%
gain overall when supporting only the Calc operator. In
some queries we even get more than a 30% gain, so it looks like an effective
approach.
Best,
Ron
Jingsong Li wrote on Mon, May 29, 2023 at 14:33:
> Thanks Ron for the proposal.
Fang Yong created FLINK-32213:
-
Summary: Add get off heap buffer in memory segment
Key: FLINK-32213
URL: https://issues.apache.org/jira/browse/FLINK-32213
Project: Flink
Issue Type: Improvement
Hi, Feng.
Notice this FLIP only supports batch mode for time travel. Would it also make
sense to support stream mode, to read a snapshot of the table as a bounded
stream?
Best regards,
Yuxia
----- Original Message -----
From: "Benchao Li"
To: "dev"
Sent: Monday, May 29, 2023 6:04:53 PM
Subject: Re: [DISCUSS]
Hi Piotrek,
Thanks for providing more details of the alternative approach!
If I understand your proposal correctly, here are the requirements for it
to work without incurring any regression:
1) The source needs a way to determine whether there exists backpressure.
2) If there is backpressure,
> I’d assumed that there wasn’t a good way to migrate state stored with an
> older version of Kryo to a newer version - if you’ve solved that, kudos.
I hope I've solved this. The pull request is supposed to do exactly this.
Please let me know if you can propose a scenario that would break this.
Hi Kurt,
I personally think it’s a very nice improvement, and that the longer-term goal
of removing built-in Kryo support for state serialization (while a good one)
warrants a separate FLIP.
Perhaps an intermediate approach would be to disable the use of Kryo for state
serialization by
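One existing knob in this direction: Flink can already be configured to fail fast instead of silently falling back to Kryo for generic types. A minimal config sketch using the existing `pipeline.generic-types` option (note this disables the Kryo fallback for all generic types, not only for state serialization):

```yaml
# flink-conf.yaml: disable fallback to Kryo-based generic serialization;
# jobs will fail at type-extraction time instead of using Kryo silently.
pipeline.generic-types: false
```

The same effect is available per job via `ExecutionConfig#disableGenericTypes()`.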
Hi everyone. I would like to start the discussion thread for FLIP-317: Upgrade
Kryo from 2.24.0 to 5.5.0 [1].
There is a pull-request associated with this linked in the FLIP.
I'd particularly like to hear about:
- Chesnay Schepler's request to consider removing Kryo serializers from the
Matheus Felisberto created FLINK-32212:
--
Summary: Job restarting indefinitely after an
IllegalStateException from BlobLibraryCacheManager
Key: FLINK-32212
URL:
Hi, Shammon
Thanks for driving this FLIP, [Support Customized Job Meta Data Listener]
will make it easier for Flink to collect lineage information.
I fully agree with the overall solution and have a small question:
1. Will an exception thrown by the listener affect the normal execution
process?
Hi Samrat,
Regarding the sink, will it only support append-only tables (no changelog
mode support)? It looks like it's the case, so IMO, it should be mentioned
in the limitations section.
On Sun, May 28, 2023 at 9:52 PM Samrat Deb wrote:
> Hello all,
>
> Context:
> Amazon Redshift [1] is a
Hi
@Jing
> Your proposal to dynamically adjust the checkpoint intervals is elegant! It
> makes sense to build it as a generic feature in Flink. Looking forward to
> it. However, for some user cases, e.g. when users were aware of the bounded
> sources (in the HybridSource) and care more about the
Hi yuxia
> But from the code in Proposed Changes, once we register the Catalog, we
initialize it and open it. right?
Yes, in order to avoid inconsistent semantics of the original CREATE
CATALOG DDL, the Catalog will be initialized directly in registerCatalog so
that parameter validation can be
Hi, Feng.
I'm trying to understand the meaning of *lazy initialization*. If I'm wrong,
please correct me.
IIUC, lazy initialization means the catalog is initialized only when you
first access it. But from the code in Proposed Changes, once we register the
Catalog,
we initialize it and
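The lazy semantics in question can be illustrated with a small sketch (class and method names here are hypothetical, not Flink's actual API): registerCatalog only records how to create the catalog, and open() runs on first access.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical sketch of lazy catalog initialization: registration stores
// only a factory; the catalog is created and opened on first access.
class LazyCatalogManager {
    interface Catalog {
        void open();
    }

    private final Map<String, Supplier<Catalog>> factories = new HashMap<>();
    private final Map<String, Catalog> opened = new HashMap<>();

    void registerCatalog(String name, Supplier<Catalog> factory) {
        // No validation or open() here -- this is what "lazy" means.
        factories.put(name, factory);
    }

    Catalog getCatalog(String name) {
        return opened.computeIfAbsent(name, n -> {
            Catalog c = factories.get(n).get();
            c.open(); // initialized only when first accessed
            return c;
        });
    }
}
```

The eager variant discussed in the thread would instead call open() inside registerCatalog, which is what allows parameter validation at DDL time.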
Hi team,
I’d like to start a discussion about FLIP-316 [1], which introduces a SQL
driver as the
default main class for Flink SQL jobs.
Currently, Flink SQL could be executed out of the box either via SQL
Client/Gateway
or embedded in a Flink Java/Python program.
However, each one has its
# Can Calcite support this syntax ` VERSION AS OF` ?
This also depends on whether this is defined in standard or any known
databases that have implemented this. If not, it would be hard to push it
to Calcite.
# getTable(ObjectPath object, long timestamp)
Then we again come to the problem of
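For illustration only (these are not Flink's actual classes), the lookup a catalog might perform behind a getTable(tablePath, timestamp) call is a floor search over snapshot times: return the newest snapshot at or before the requested timestamp.

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch of timestamp-based table resolution for time travel:
// snapshots are keyed by creation time, and a query for time T resolves to
// the latest snapshot whose time is <= T.
class SnapshotCatalog {
    private final TreeMap<Long, String> snapshots = new TreeMap<>();

    void addSnapshot(long timestamp, String tableVersion) {
        snapshots.put(timestamp, tableVersion);
    }

    String getTable(String tablePath, long timestamp) {
        Map.Entry<Long, String> entry = snapshots.floorEntry(timestamp);
        if (entry == null) {
            throw new IllegalArgumentException(
                "No snapshot of " + tablePath + " at or before " + timestamp);
        }
        return entry.getValue();
    }
}
```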
Hi Feng,
Thanks for your effort! +1 for the proposal.
One of the major changes is that current design will provide
Map<String, Catalog> catalogs as a snapshot instead of a cache, which means
once it has been initialized, any changes done by other sessions will not
affect it. Point 6 described follow-up options
Hi all,
Thanks for the detailed responses!
I’ve taken a closer look, and as far as I can tell, this is an existing problem
with the checkpointing mechanism. This is also tracked in this JIRA [1], but
not fixed. The PR attached to the JIRA temporarily blocks the events being sent
from
Thanks Qingsheng for taking care of this!
Best regards,
Jing
On Mon, May 29, 2023 at 4:24 AM Qingsheng Ren wrote:
> Hi Kurt,
>
> The permission has been granted. Feel free to reach out if you have any
> questions.
>
> Looking forward to your FLIP!
>
> Best,
> Qingsheng
>
> On Mon, May 29, 2023
Hi Weijie,
Thanks for your contribution and feedback! In case there are reasons that
prevent us from upgrading them, we can still leverage virtualenv or pipenv
to create a dedicated environment for the Flink release. WDYT?
cc Dian Fu
@Dian
I was wondering if you know the reason. Thanks!
Best
Big +1 on this, thanks yuxia for driving this!
yuxia wrote on Mon, May 29, 2023 at 14:55:
> Hi, community.
>
> I want to start the discussion about Hive dialect shouldn't fall back to
> Flink's default dialect.
>
> Currently, when the HiveParser fails to parse the SQL in Hive dialect,
> it'll fall back to
Hi, thanks for your reply.
@Benchao
> did you consider the pushdown abilities compatible
In the current design, the implementation of TimeTravel is delegated to
Catalog. We have added a function called getTable(ObjectPath tablePath,
long timestamp) to obtain the corresponding CatalogBaseTable at
Hi, community.
I want to start the discussion about Hive dialect shouldn't fall back to
Flink's default dialect.
Currently, when the HiveParser fails to parse the SQL in Hive dialect, it'll
fall back to Flink's default parser[1] to handle Flink-specific statements like
"CREATE CATALOG xx
Thanks Yuxia for the proposal.
> CALL [catalog_name.][database_name.]procedure_name ([ expression [,
> expression]* ] )
The expression can be a function call. Does this need to be a function
call? Do you have some example?
> Procedure returns T[]
Procedure looks like a TableFunction, do you
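To make the T[] question concrete, here is a hedged sketch of what a procedure with an array return type might look like; the interface name and signature are illustrative, not the FLIP's final API.

```java
// Hypothetical shape of a stored procedure that returns T[]: like a
// TableFunction, a single call can produce multiple rows, here modeled
// as an array of results.
interface Procedure<T> {
    T[] call(Object... args);
}

// Example implementation: returns one summary string per argument.
class EchoProcedure implements Procedure<String> {
    public String[] call(Object... args) {
        String[] out = new String[args.length];
        for (int i = 0; i < args.length; i++) {
            out[i] = "arg[" + i + "]=" + args[i];
        }
        return out;
    }
}
```

Under this shape, `CALL my_catalog.my_db.echo('a', 1)` would surface each array element as one row of the result, which is what makes the comparison to TableFunction natural.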
Fang Yong created FLINK-32211:
-
Summary: Supports row format in ExecutorImpl for jdbc driver
Key: FLINK-32211
URL: https://issues.apache.org/jira/browse/FLINK-32211
Project: Flink
Issue Type:
Thanks Ron for the proposal.
Do you have some benchmark results for the performance improvement? I
am more concerned about the improvement on Flink than the data in
other papers.
Best,
Jingsong
On Mon, May 29, 2023 at 2:16 PM liu ron wrote:
>
> Hi, dev
>
> I'd like to start a discussion about
Hi, everyone.
I’d like to start a discussion about FLIP-311: Support Call Stored Procedure
[1]
Stored procedure provides a convenient way to encapsulate complex logic to
perform data manipulation or administrative tasks in external storage systems.
It's widely used in traditional databases
Hi, dev
I'd like to start a discussion about FLIP-315: Support Operator Fusion
Codegen for Flink SQL[1]
As main memory grows, query performance is more and more determined by the
raw CPU cost of query processing itself; this is due to the query
processing techniques based on interpreted
+1 (non-binding)
- checked signature and checksum
- checked tag in github repository
- compiled from source
- checked the web PR
BTW, please remember to update docs/jdbc.yaml for the v3.1 branch after the
release is completed.
Best regards,
Weijie
Jing Ge wrote on Mon, May 29, 2023 at 04:26:
> +1