Thanks for creating RC1
* Downloaded artifacts
* Built from sources
* Verified checksums and gpg signatures
* Verified versions in pom files
* Checked NOTICE, LICENSE files
The strange thing I faced is that
CheckpointAfterAllTasksFinishedITCase.testRestoreAfterSomeTasksFinished
fails on AZP [1].
Hi David,
It’s a deliberate choice to decouple the connectors. We shouldn’t block
Flink 1.18 on connector statuses. There’s already work being done to fix
the Flink Kafka connector. Any Flink connector comes after the new minor
version, similar to how it has been for all other connectors with
After digging into the flink-python code, it seems that if
`PYFLINK_GATEWAY_DISABLED` is set to false as an environment variable, then
using Types.LIST(Types.ROW([...])) does not cause any issue once the Java
Gateway is launched.
It was unexpected that a local Flink run sets this flag to false explicitly.
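For context, the flag behaves roughly like this (a minimal sketch; `gateway_disabled` is a hypothetical helper written for illustration — the real truthiness check lives inside PyFlink's java_gateway module and may differ):

```python
import os

# PyFlink reads PYFLINK_GATEWAY_DISABLED to decide whether to launch its
# own Java gateway. This helper is a hypothetical re-creation of that
# check, not the actual PyFlink implementation.
def gateway_disabled() -> bool:
    return os.environ.get("PYFLINK_GATEWAY_DISABLED", "false").lower() in ("1", "true")

# A local run that explicitly sets the flag to "false", as described above:
os.environ["PYFLINK_GATEWAY_DISABLED"] = "false"
print(gateway_disabled())  # False -> the Java gateway will be launched
```

With the flag at "false" the gateway is started, and Types.LIST(Types.ROW([...])) can be resolved against it.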
Hi,
I have opened a draft PR [1] that shows the minimal required changes and a
suggested unit test setup for Java version specific tests.
There is still some work to be done (run all benchmarks, add more tests for
compatibility/migration).
If you have time, please review / comment on the approach.
If there are no more questions or concerns, I will start the voting thread
tomorrow.
On 2022/06/27 13:09:51 Roc Marshal wrote:
> Hi, all,
>
> I would like to open a discussion on porting JDBC Source to new Source API
> (FLIP-27[1]).
>
> Martijn Visser, Jing Ge and I had a preliminary
Hi Team,
In my previous email [1] I described our challenges in migrating the
existing Iceberg SinkFunction-based implementation to the new SinkV2-based
implementation.
As a result of the discussion around that topic, I have created
FLIP-371 [2] to address the Committer-related changes,
Thanks for the efforts Peter!
I've just analyzed it and I think it's a useful feature.
+1 from my side.
G
On Thu, Oct 5, 2023 at 12:35 PM Péter Váry
wrote:
> For the record, after the rename, the new FLIP link is:
For the record, after the rename, the new FLIP link is:
https://cwiki.apache.org/confluence/display/FLINK/FLIP-371%3A+Provide+initialization+context+for+Committer+creation+in+TwoPhaseCommittingSink
Thanks,
Peter
Péter Váry wrote (on Thu, Oct 5, 2023 at 11:02):
> Thanks Gordon for the
Hi, Zhu Zhu,
Thanks for your feedback!
> I think we can introduce a new config option
> `taskmanager.load-balance.mode`,
> which accepts "None"/"Slots"/"Tasks". `cluster.evenly-spread-out-slots`
> can be superseded by the "Slots" mode and get deprecated. In the future
> it can support more modes,
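A sketch of what the proposed option might look like in flink-conf.yaml (the option name and values come from this proposal, not a released API):

```yaml
# Proposed (not yet released): balance by slots, superseding the
# deprecated cluster.evenly-spread-out-slots option.
taskmanager.load-balance.mode: Slots   # accepts None / Slots / Tasks
```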
Hi Jing,
Yes I agree that if we can get them resolved then that would be ideal.
I guess the worry is that at 1.17, we had a released Flink core and Kafka
connector.
At 1.18 we will have a released Flink core but no new Kafka connector. So the
last released Kafka connector would now be
Jing Ge created FLINK-33195:
---
Summary: ElasticSearch Connector should directly depend on
3rd-party libs instead of flink-shaded repo
Key: FLINK-33195
URL: https://issues.apache.org/jira/browse/FLINK-33195
Thanks Gordon for the comments!
1. I have changed the FLIP name to the one proposed by you.
2. In the Iceberg sink we need access only to the Flink metrics. We do
not specifically need the job ID in the Committer after the SinkV2
migration (more about that later). This is the reason
Hi Dawid,
Please don't get me wrong. I just described the facts, shared different
opinions, and tried to make sure we are on the same page. My intention is
clearly not to block your effort. If you, after hearing all the different
opinions, still think your solution is the right approach, please
Hi Chesnay,
Thanks for joining this discussion and sharing your thoughts!
> Connectors shouldn't depend on flink-shaded.
>
Perfect! We are on the same page. If you could read through the discussion,
you would realize that, currently, many connectors depend on
flink-shaded.
Thanks Peter for starting the FLIP.
Overall, this seems pretty straightforward and overdue, +1.
Two quick questions / comments:
1. Can you rename the FLIP to something less generic? Perhaps "Provide
initialization context for Committer creation in TwoPhaseCommittingSink"?
2. Can you
Jing Ge created FLINK-33194:
---
Summary: AWS Connector should directly depend on 3rd-party libs
instead of flink-shaded repo
Key: FLINK-33194
URL: https://issues.apache.org/jira/browse/FLINK-33194
Project:
Jing Ge created FLINK-33193:
---
Summary: JDBC Connector should directly depend on 3rd-party libs
instead of flink-shaded repo
Key: FLINK-33193
URL: https://issues.apache.org/jira/browse/FLINK-33193
Project:
Hey Jing,
If you went through the discussion, you would see it has never
shifted towards "ignore". The only concern in the discussion was that we'd
have too many options and that lookup joins require them. It was never
questioned that we should throw an exception, which was suggested in the