+1 (non-binding)
Verified based on this wiki[1].
- Verified signatures and sha512
- The source archives do not contain any binaries
- Build the source with Maven 3 and java8 (Checked the license as well)
- bin/start-cluster.sh with java8 works fine, with no unexpected logs
- Ran demo, it's
Jayadeep Jayaraman created FLINK-33603:
Summary: Fix guava shading for GCS connector
Key: FLINK-33603
URL: https://issues.apache.org/jira/browse/FLINK-33603
Project: Flink
Issue Type:
Hi Becket,
> Additionally, SplitFetcherTask requires FutureCompletingBlockingQueue as
a constructor parameter, which is not allowed now.
Sorry, it was my writing mistake. What I meant is that *SplitFetcher*
requires FutureCompletingBlockingQueue as a constructor parameter. SplitFetcher
is a
The FLIP looks good to me now, let's start the vote.
Dawid Wysakowicz wrote on Mon, Nov 20, 2023, at 22:36:
>
> @Benchao I added an example to the page.
>
> If there are no further comments, I'll start a vote on the FLIP tomorrow or
> the next day.
>
> Best,
> Dawid
>
> On Fri, 17 Nov 2023 at 12:20, xiangyu
I've created a task for this [1]. But it should not be a block for
Connector / Pulsar 4.1.0.
Best,
tison.
[1] https://issues.apache.org/jira/browse/FLINK-33602
tison wrote on Fri, Nov 10, 2023, at 19:44:
> > does it include support for Flink 1.18?
>
> Not yet. Tests for 1.16 and 1.17 can pass, but the
Zili Chen created FLINK-33602:
Summary: Pulsar connector should be compatible with Flink 1.18
Key: FLINK-33602
URL: https://issues.apache.org/jira/browse/FLINK-33602
Project: Flink
Issue Type:
Jim Hughes created FLINK-33601:
Summary: Implement restore tests for Expand node
Key: FLINK-33601
URL: https://issues.apache.org/jira/browse/FLINK-33601
Project: Flink
Issue Type: Sub-task
Hello devs,
Is any active work happening on this FLIP? As far as I can see, there
are blockers regarding artifact distribution that need to be resolved
before this can be implemented.
Is this work halted completely, or are there efforts going into
resolving the blockers first?
Our platform would benefit
+1 (binding)
1. Downloaded the archives, checksums, and signatures
2. Verified the signatures and checksums
3. Extracted and inspected the source code for binaries
4. Compiled and tested the source code via mvn verify
5. Verified license files / headers
6. Deployed helm chart to test cluster
7. Build
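For anyone reproducing step 2, the checksum half of the verification can be sketched as below. This is a minimal illustration, not the release scripts: the file bytes and the "published" digest are placeholders, and a real run would hash the downloaded archive and compare against the contents of its published .sha512 file.

```java
import java.security.MessageDigest;

public class ChecksumCheck {
    // Hex-encode the SHA-512 digest of a byte array.
    static String sha512Hex(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-512");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) {
            sb.append(String.format("%02x", b & 0xff));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Placeholder for the downloaded source archive's bytes.
        byte[] archive = "example-release-bytes".getBytes("UTF-8");
        String computed = sha512Hex(archive);
        // Placeholder: a real check reads this from the .sha512 file
        // published next to the archive on dist.apache.org.
        String published = sha512Hex(archive);
        System.out.println(computed.equals(published) ? "checksum OK" : "checksum MISMATCH");
    }
}
```

Signature verification is a separate step (gpg against the KEYS file) and is not shown here.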
The paper looks interesting, but it might not manifest the described
benefit for practical reasons:
1. It forces you to remember all keys in the broadcasted (partitioned is
impossible without timeouts, etc.) operator state. Forever. This itself is
a blocker for a bunch of pipelines. The primary
Hi Danny,
> My current proposal is that the REST API should not leave the Flink
cluster
in an inconsistent state.
Regarding consistency, Flink only cares about individual jobs, but I can
see your point.
For streaming, this is probably something we could address by book-keeping
jobs submitted by
+1 (binding)
- Verified Helm repo works as expected, points to correct image tag, build,
version
- Verified basic examples + checked operator logs, everything looks as
expected
- Verified hashes, signatures and source release contains no binaries
- Ran built-in tests, built jars + docker image
Jark Wu created FLINK-33600:
Summary: Print cost time for batch queries in SQL Client
Key: FLINK-33600
URL: https://issues.apache.org/jira/browse/FLINK-33600
Project: Flink
Issue Type: New
Dawid Wysakowicz created FLINK-33599:
Summary: Run restore tests with RocksDB state backend
Key: FLINK-33599
URL: https://issues.apache.org/jira/browse/FLINK-33599
Project: Flink
Issue
@Benchao I added an example to the page.
If there are no further comments, I'll start a vote on the FLIP tomorrow or
the next day.
Best,
Dawid
On Fri, 17 Nov 2023 at 12:20, xiangyu feng wrote:
> >After this FLIP is done, FLINK-25015() can utilize this ability to set
> > job name for queries.
Yun Tang created FLINK-33598:
Summary: Watch HA configmap via name instead of labels to reduce
pressure on APIserver
Key: FLINK-33598
URL: https://issues.apache.org/jira/browse/FLINK-33598
Project:
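For context on why this reduces APIserver pressure: a name-scoped watch asks the APIserver for exactly one object via a field selector, while a label-based watch makes it filter every ConfigMap in the namespace. The URL-building helper below is purely illustrative (it is not Flink's or the Kubernetes client's actual code; names are made up), but the two selector styles are the real Kubernetes watch parameters.

```java
public class WatchSelector {
    // Name-scoped watch: the APIserver only tracks one ConfigMap.
    static String watchByName(String ns, String name) {
        return "/api/v1/namespaces/" + ns
                + "/configmaps?watch=true&fieldSelector=metadata.name%3D" + name;
    }

    // Label-based watch: the APIserver filters the whole namespace.
    static String watchByLabels(String ns, String labelSelector) {
        return "/api/v1/namespaces/" + ns
                + "/configmaps?watch=true&labelSelector=" + labelSelector;
    }

    public static void main(String[] args) {
        System.out.println(watchByName("flink", "myjob-cluster-config-map"));
        System.out.println(watchByLabels("flink", "app%3Dmyjob"));
    }
}
```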
+1 (binding)
Verified:
- Release files, maven repo contents, checksums, signature
- Verified and installed from Helm chart
- Ran basic stateful example and verified
- Upgrade flow
- No errors in logs
- Autoscaler (turn on/off, verify configmap cleared correctly)
- In-place scaling
Hi everyone,
Currently, the Datagen connector generates data that doesn't match the schema
definition
when dealing with fixed-length and variable-length fields. It defaults to a
unified length of 100
and requires manual configuration by the user. This violates the correctness of
schema
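For reference, the manual configuration being discussed is the per-field length option on the datagen connector; a minimal DDL sketch (table and field names are made up) of what users must write today to match the declared types:

```sql
CREATE TABLE gen_source (
  code CHAR(8),
  name VARCHAR(20)
) WITH (
  'connector' = 'datagen',
  -- without these, both fields fall back to the unified default length of 100
  'fields.code.length' = '8',
  'fields.name.length' = '20'
);
```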
Thanks Danny for driving the release,
+1 (non-binding)
- built from source code succeeded
- verified signatures
- verified checksums
- checked release notes
Best,
Jiabao
> On Nov 20, 2023, at 19:11, Danny Cranmer wrote:
>
> Hello all,
>
> +1 (binding).
>
> - Release notes look good
> - Signatures and
Hello all,
+1 (binding).
- Release notes look good
- Signatures and checksums match
- There are no binaries in the source archive
- pom versions are correct
- Tag is present in Github
- CI passes against Flink 1.17 and 1.18
- Source build and tests pass
Thanks,
Danny
On Wed, Nov 1, 2023 at
Dawid Wysakowicz created FLINK-33597:
Summary: Can not use a nested column for a join condition
Key: FLINK-33597
URL: https://issues.apache.org/jira/browse/FLINK-33597
Project: Flink
yunfan created FLINK-33596:
Summary: Support fold expression before transfer to RexNode
Key: FLINK-33596
URL: https://issues.apache.org/jira/browse/FLINK-33596
Project: Flink
Issue Type: Bug
Hi all,
I'd like to start a discussion of FLIP-395: Deprecate Global Aggregator Manager
[1].
Global Aggregate Manager was introduced in [2] to support event time
synchronization across sources and more generally, coordination of parallel
tasks. AFAIK, this was only used in the Kinesis source
Thanks Leonard for the detailed feedback and input.
> The 'Max source parallelism' is the information that the runtime offered
> to the Source as a hint to infer the actual parallelism. A name with a max
> prefix but calculated with a minimum value confuses me a lot, especially
> when I read the HiveSource
Hi Jing!
> the upcoming OpenTelemetry based TraceReporter will use the same Span
> implementation and will not support trace_id and span_id. Does it make
> sense to at least add the span_id into the current Span design? The
default
> implementation could follow your suggestion:
Hi Roman!
> 1. why the existing MetricGroup interface can't be used? It already had
> methods to add metrics and spans ...
That's because of the need to:
a) associate the spans specifically with the job's initialisation
b) logically aggregate the span's attributes across subtasks.
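To illustrate point (b), the kind of aggregation meant here can be sketched as below. This is a hypothetical stand-alone example, not Flink's trace code: each subtask reports a numeric span attribute, and the job-level span carries one folded value (summation here; min/max would follow the same shape).

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SpanAggregation {
    // Fold per-subtask attribute maps into one job-level attribute map.
    static Map<String, Long> aggregate(List<Map<String, Long>> perSubtask) {
        Map<String, Long> merged = new HashMap<>();
        for (Map<String, Long> attrs : perSubtask) {
            for (Map.Entry<String, Long> e : attrs.entrySet()) {
                merged.merge(e.getKey(), e.getValue(), Long::sum);
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        // Two subtasks each report how much state they restored.
        List<Map<String, Long>> subtasks = List.of(
                Map.of("restoredStateSizeBytes", 100L),
                Map.of("restoredStateSizeBytes", 250L));
        System.out.println(aggregate(subtasks)); // {restoredStateSizeBytes=350}
    }
}
```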
Hi Junrui,
Thanks for the clarification. On one hand, adding more methods flatly into
the RuntimeContext will increase the effort for users of RuntimeContext,
but that impact is limited. The big impact, on the other hand, is for
users who want to focus on the execution config,