Thanks, Yanquan. We can also introduce some restart/recovery strategies for
handling MDS unavailability.
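One such strategy could be bounded retries against the MDS with a fallback to
the metadata cached in DynamicKafkaWriteState. A minimal sketch (all class and
method names here are hypothetical placeholders, not the actual FLIP-515 API):

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: resolve Kafka cluster metadata with bounded retries
// against the metadata service (MDS), falling back to the metadata snapshot
// cached in checkpointed state when the MDS stays unavailable.
public class MetadataResolver {

    // Hypothetical MDS client interface; a real implementation would call
    // the Kafka Metadata Service over the network.
    interface MetadataService {
        Optional<Map<String, String>> fetchClusterMetadata();
    }

    static Map<String, String> resolve(MetadataService mds,
                                       Map<String, String> cachedFromState,
                                       int maxRetries,
                                       long backoffMillis) throws InterruptedException {
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            Optional<Map<String, String>> fresh = mds.fetchClusterMetadata();
            if (fresh.isPresent()) {
                // MDS reachable: use the latest metadata.
                return fresh.get();
            }
            // Exponential backoff before the next attempt.
            TimeUnit.MILLISECONDS.sleep(backoffMillis * (1L << attempt));
        }
        // MDS unavailable after all retries: fall back to the cached snapshot
        // so the writer can keep producing to previously known clusters.
        return cachedFromState;
    }

    public static void main(String[] args) throws InterruptedException {
        Map<String, String> cached = Map.of("cluster-a", "broker-a:9092");
        // Simulate an MDS that is down: every fetch returns empty.
        MetadataService downMds = Optional::empty;
        Map<String, String> resolved = resolve(downMds, cached, 3, 1);
        System.out.println(resolved.get("cluster-a"));
    }
}
```

Whether the job should instead fail fast (rather than fall back) when the MDS
is down for too long is exactly the policy decision we would document for
users.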

On Sat, Apr 5, 2025 at 10:57 PM Yanquan Lv <decq12y...@gmail.com> wrote:

> Your explanation is helpful.
>
> For question 3, I think the key lies in whether the situation where the MDS
> is unavailable is considered permissible, since subsequent updates to
> DynamicKafkaWriteState depend on this component.
> I approve of using state if that situation is acceptable (for fault
> tolerance reasons). We should also remind users to check the status of the
> MDS (e.g. by adding logs, depending on the implementation) to avoid
> subsequent unrecoverable failures.
>
>
> On Sun, Apr 6, 2025 at 04:11, Őrhidi Mátyás <matyas.orh...@gmail.com> wrote:
>
> > 1. Yes, we can add Table API support in the future.
> > 2. The failover is coordinated by the MDS itself.
> > 3. The state is there to support situations where the MDS is not
> > accessible; it stores the cached metadata.
> >
> > I hope it clarifies,
> > Matyas
> >
> > On Sat, Apr 5, 2025 at 9:55 AM Yanquan Lv <decq12y...@gmail.com> wrote:
> >
> > > Hi, Matyas. Thanks for driving this.
> > > I have some questions and I hope you can help explain them:
> > >
> > > 1. The Dynamic Kafka Sink currently looks like it can only be used in
> > > DataStream jobs. Will Table API support be considered in the future?
> > > 2. Even though we have the Kafka Metadata Service, due to its lag we
> > > may still encounter situations where the target cluster is unavailable
> > > when calling the SinkWriter.write() method. Therefore, a failover seems
> > > inevitable, but we can run normally after the failover. Is my
> > > understanding correct?
> > > 3. Because we only support at-least-once, we do not need to save
> > > transaction information, so I still have doubts about the necessity of
> > > DynamicKafkaWriteState. When a job fails, can we rebuild the
> > > streamDataMap information directly from the Kafka Metadata Service, or
> > > is it that the Kafka Metadata Service may not have completed
> > > initialization, or that we cannot directly use the latest information
> > > obtained from it?
> > >
> > >
> > > On Fri, Mar 14, 2025 at 05:04, Őrhidi Mátyás <matyas.orh...@gmail.com> wrote:
> > >
> > > > Hi devs,
> > > >
> > > > I'd like to start a discussion on FLIP-515: Dynamic Kafka Sink [1].
> > > > This is an addition to the existing Dynamic Kafka Source [2] to make
> > > > the functionality complete.
> > > >
> > > > Feel free to share your thoughts and suggestions to make this feature
> > > > better.
> > > >
> > > > + Mason Chen
> > > >
> > > > Thanks,
> > > > Matyas
> > > >
> > > > [1]
> > > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-515%3A+Dynamic+Kafka+Sink
> > > >
> > > > [2]
> > > > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=217389320
> > > >
> > >
> >
>
