Hi Roman,
Thanks for the proposal.
I am just curious whether this could really benefit our users, given such
complex configuration logic. Could you share some real experimental
results?
BTW, as discussed before, I am not sure whether different lifecycles of RocksDB
state backends would
Hi, all
Sorry for the late response. As Shengkai mentioned, I'm currently working with
him on the SQL Client, dedicated to implementing the Remote Mode of the SQL
Client. I have written a draft implementation plan and will write a FLIP about
it ASAP. If you are interested, please take a look at the
Hi Piotr,
I also think the scenario mentioned in this FLIP is worth addressing. I am
happy to discuss this together and figure out more usable APIs.
I guess the choice of API pretty much depends on when users need to use it.
I am assuming it is needed when using dataStream.window(...). Is
钟洋洋 created FLINK-29970:
---
Summary: Prometheus cannot collect flink metrics
Key: FLINK-29970
URL: https://issues.apache.org/jira/browse/FLINK-29970
Project: Flink
Issue Type: Bug
Components:
Rui Fan created FLINK-29969:
---
Summary: Show the root cause when exceeded checkpoint tolerable
failure threshold
Key: FLINK-29969
URL: https://issues.apache.org/jira/browse/FLINK-29969
Project: Flink
Caizhi Weng created FLINK-29968:
---
Summary: Update streaming query document for Table Store to
include full compaction changelog producer
Key: FLINK-29968
URL: https://issues.apache.org/jira/browse/FLINK-29968
lincoln lee created FLINK-29967:
---
Summary: Optimize determinism requirements from sink node with
considering that SinkUpsertMaterializer already supports upsertKey
Key: FLINK-29967
URL:
Huang Xingbo created FLINK-29966:
Summary: Replace and redesign the Python api documentation base
Key: FLINK-29966
URL: https://issues.apache.org/jira/browse/FLINK-29966
Project: Flink
Issue
Hi Roman,
Thanks for the proposal! This will make scheduling a lot more flexible for our
use case.
Just a quick follow-up question about the number of new configs we’re adding
here. It seems like the proposed configs provide a lot of flexibility, but at
the expense of added complexity.
It
Jane Chan created FLINK-29965:
-
Summary: Support Spark/Hive with S3
Key: FLINK-29965
URL: https://issues.apache.org/jira/browse/FLINK-29965
Project: Flink
Issue Type: Sub-task
Jane Chan created FLINK-29964:
-
Summary: Support Spark/Hive with OSS
Key: FLINK-29964
URL: https://issues.apache.org/jira/browse/FLINK-29964
Project: Flink
Issue Type: Sub-task
Jane Chan created FLINK-29963:
-
Summary: Flink Table Store supports pluggable filesystem
Key: FLINK-29963
URL: https://issues.apache.org/jira/browse/FLINK-29963
Project: Flink
Issue Type:
John Roesler created FLINK-29962:
Summary: Exclude Jamon 2.3.1
Key: FLINK-29962
URL: https://issues.apache.org/jira/browse/FLINK-29962
Project: Flink
Issue Type: Improvement
Mingliang Liu created FLINK-29961:
-
Summary: Make referencing custom image clearer for Docker
Key: FLINK-29961
URL: https://issues.apache.org/jira/browse/FLINK-29961
Project: Flink
Issue
Hi Yun Gao,
FYI I just updated the article after your review:
https://echauchot.blogspot.com/2022/11/flink-howto-migrate-real-life-batch.html
Best
Etienne
On 09/11/2022 at 10:04, Etienne Chauchot wrote:
Hi Yun Gao,
thanks for your email and your review !
My comments are inline
Le
OK, removed the fallback part.
Thanks for the help!
G
On Wed, Nov 9, 2022 at 5:03 PM Őrhidi Mátyás
wrote:
> looks better! no further concerns
>
> Cheers,
> Matyas
>
> On Mon, Nov 7, 2022 at 9:21 AM Gabor Somogyi
> wrote:
>
> > Oh gosh, copied wrong config keys so fixed my last mail with
looks better! no further concerns
Cheers,
Matyas
On Mon, Nov 7, 2022 at 9:21 AM Gabor Somogyi
wrote:
> Oh gosh, copied wrong config keys so fixed my last mail with green.
>
> On Mon, Nov 7, 2022 at 6:07 PM Gabor Somogyi
> wrote:
>
> > Hi Matyas,
> >
> > In the meantime I was thinking about
Danny Cranmer created FLINK-29960:
-
Summary: Update README to Reflect AWS Connectors Single Repo
Key: FLINK-29960
URL: https://issues.apache.org/jira/browse/FLINK-29960
Project: Flink
Issue
Gyula Fora created FLINK-29959:
--
Summary: Use optimistic locking when patching resource status
Key: FLINK-29959
URL: https://issues.apache.org/jira/browse/FLINK-29959
Project: Flink
Issue Type:
Hi Yanfei,
Thanks, good questions
> 1. Is shared-memory only for the state backend? If both
> "taskmanager.memory.managed.shared-fraction: >0" and
> "state.backend.rocksdb.memory.managed: false" are set at the same time,
> will the shared-memory be wasted?
Yes, shared memory is only for the
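For reference, the two options quoted in the question above would sit together in flink-conf.yaml roughly as follows (a hedged illustration of the combination being discussed; the fraction value is only a placeholder, not a recommendation):

```yaml
# Reserve a slice of managed memory to be shared across slots
# (illustrative value; the question above concerns what happens
# when this is set > 0 ...)
taskmanager.memory.managed.shared-fraction: 0.3
# ... while RocksDB is told NOT to use managed memory:
state.backend.rocksdb.memory.managed: false
```

As the reply notes, the shared slice is intended only for the state backend, which is why the combination above raises the question of whether that memory would simply go unused.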
Chesnay Schepler created FLINK-29958:
Summary: Add new connector_artifact shortcode
Key: FLINK-29958
URL: https://issues.apache.org/jira/browse/FLINK-29958
Project: Flink
Issue Type:
Chesnay Schepler created FLINK-29957:
Summary: Rework connector docs integration
Key: FLINK-29957
URL: https://issues.apache.org/jira/browse/FLINK-29957
Project: Flink
Issue Type:
Hi,
And by the way, I was planning to write another article comparing the
performance of the DataSet, DataStream and SQL APIs on TPCDS query3. I
thought that I could run the pipelines on an Amazon EMR cluster with
different data sizes: 1 GB, 100 GB, 1 TB.
Would it be worth it? What do you
Matthias Pohl created FLINK-29956:
-
Summary: Kafka-related test infrastructure code is scattered over
multiple classes/environments
Key: FLINK-29956
URL: https://issues.apache.org/jira/browse/FLINK-29956
Hi everyone,
I agree with Xintong in the sense that I don't see what has changed since
the original decision on this topic. In my opinion, there is a high cost in
moving to ASF now; namely, I fear we will lose many of the >1200 members
and the momentum that I see in the workspace. To me there
Jeff Yang created FLINK-29955:
-
Summary: Upgrade hugo version to v0.104.0
Key: FLINK-29955
URL: https://issues.apache.org/jira/browse/FLINK-29955
Project: Flink
Issue Type: Improvement
I'm happy to announce that we have unanimously approved this release.
There are 3 approving votes, 3 of which are binding:
* Danny
* Martijn
* Chesnay
There are no disapproving votes.
Thanks everyone!
On 08/11/2022 11:09, Martijn Visser wrote:
+1 (binding)
- Downloaded artifacts
- Checked
Hi Yun Gao,
thanks for your email and your review !
My comments are inline
On 08/11/2022 at 06:51, Yun Gao wrote:
Hi Etienne,
Many thanks for the article! Flink is indeed continuously increasing its
ability to unify batch / stream processing with the same API, and
it's a great
Hi all,
Big thanks to Yun Gao for driving this!
> I wonder whether we need to add a new option when registering timers. Won't
> it be sufficient to flush all pending timers on termination but not allow
> new ones to be registered?
Maximilian, I'm sure that this single semantics is not enough. All
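The semantics being debated above, flushing already-registered timers at termination while rejecting new registrations, can be illustrated with a small self-contained sketch (hypothetical stand-in code, not Flink's actual timer API):

```python
import heapq
import itertools

class DrainingTimerService:
    """Toy model of 'flush pending timers on termination, but allow
    no new registrations' (not Flink's real TimerService)."""

    def __init__(self):
        self._seq = itertools.count()  # tie-breaker for equal timestamps
        self._timers = []              # min-heap of (ts, seq, callback)
        self._terminating = False

    def register_timer(self, timestamp, callback):
        if self._terminating:
            raise RuntimeError("cannot register timers during termination")
        heapq.heappush(self._timers, (timestamp, next(self._seq), callback))

    def terminate(self):
        """Fire every pending timer in timestamp order, then stop."""
        self._terminating = True
        fired = []
        while self._timers:
            ts, _, cb = heapq.heappop(self._timers)
            cb(ts)
            fired.append(ts)
        return fired

# Usage: timers registered before termination still fire, in order;
# registering afterwards raises.
svc = DrainingTimerService()
svc.register_timer(20, lambda ts: None)
svc.register_timer(10, lambda ts: None)
print(svc.terminate())  # [10, 20]
```

The open question in the thread is precisely whether this one behavior covers all use cases, or whether some timers should instead be dropped (or allowed) at termination, which would argue for an option at registration time.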
Hi Alexander!
Thanks for your interest in Flink Table Store and I'm glad to share my
thoughts with you.
> Given that there is always a single writer to a stream, in what situations
> can concurrent writes ever happen to Flink Table Store?
I'm not the author of the FLIP, so I'm not sure what "writer"