> > probably not a big problem, if the plan is to eventually make this
> > feature default and remove the configuration option.
>
> I guess we both agree that it won't be a problem in the long run. [...t]he
> logic for deciding which path is to be used. Admittedly, this is not super
> expensive, but still worth comparing with the benefit.
>
> [...] supposed to tell users that "Your Flink job performance can be
> considerably optimal if your job's average record size is too small".
>
> > 5. The benefit is limited.
> > a. [...it will have a large impac]t on performance. But if the
> > latencyTrackingInterval is configured to be relatively large, such as
> > 10s, this impact can be ignored.
>
> --
>
> Best,
> Xintong
> On Mon, Aug 7, 2023 at 4:24 PM Yunfeng Zhou <flink.zhouyunf...@gmail.com>
> wrote:
>
> > Hi Matt,
> >
> > Thanks for letting me learn about usages of latency markers. [...] against
> > TPC-DS and hope that it could cover the common use cases that you are
> > concerned about. I believe there would still be performance improvement
> > when the size of each StreamRecord increases, though the improvement
> > will not be [...]
> > [...] monitor the end-to-end latency of flink jobs. If the
> > latencyTrackingInterval is set too small (like 5ms), it will have a
> > large impact on performance. But if the latencyTrackingInterval is
> > configured to be relatively large, such as 10s, this impact can be
> > ignored.
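[Editor's sketch, not part of the original mail: the interval discussed above corresponds to Flink's `metrics.latency.interval` option, in milliseconds; a minimal config fragment for the "relatively large" 10s setting discussed here:]

```yaml
# flink-conf.yaml -- enable latency markers with a coarse interval
# so that the tracking overhead stays negligible (value in milliseconds)
metrics.latency.interval: 10000   # emit a latency marker every 10s
```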
> >
> > --
> >
> > Best,
> > Matt Wang
>
> ---- Replied Message ----
> | From | Yunfeng Zhou |
> | Date | 07/14/2023 20:30 |
> | To | |
> | Subject | Re: [DISCUSS] FLIP-330: Support specifying record timestamp
> requirement |
> Hi Matt,
>
> 1. I tried to add the tag serialization process back to my POC code and
> ran the ben[chmark ...]
>
> > 2. [...] limit the usage scenarios. Whether the solution design can
> > retain the capability of the latency marker;
> > 3. The data of the POC test is of long type. Here I want to see how much
> > profit it will have if it is a string with a length of 100B or 1KB.
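[Editor's sketch, not from the POC: a back-of-the-envelope illustration of why the relative profit shrinks with record size. Assuming the 1-byte tag + 8-byte timestamp framing from the proposal, the framing overhead fraction of an N-byte value is 9/(9+N):]

```java
public class OverheadSketch {
    // Fraction of serialized bytes spent on the 1-byte tag + 8-byte
    // timestamp framing, for an N-byte record value.
    static double overheadFraction(int valueBytes) {
        return 9.0 / (9.0 + valueBytes);
    }

    public static void main(String[] args) {
        // An 8-byte long pays a far larger relative framing cost
        // than a 100B or 1KB string value.
        System.out.printf("8B long:  %.1f%%%n", 100 * overheadFraction(8));    // ~52.9%
        System.out.printf("100B str: %.1f%%%n", 100 * overheadFraction(100));  // ~8.3%
        System.out.printf("1KB str:  %.1f%%%n", 100 * overheadFraction(1024)); // ~0.9%
    }
}
```

This only bounds the serialization-byte saving; actual throughput gains also depend on the per-record branching and timestamp-handling costs discussed elsewhere in the thread.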
---- Replied Message ----
| From | Yunfeng Zhou |
| Date | 07/13/2023 14:52 |
| To | |
| Subject | Re: [DISCUSS] FLIP-330: Support specifying record timestamp
requirement |
Hi Jing,

Thanks for reviewing this FLIP.

1. I did change the names of some APIs in the FLIP compared with the
original version according to which I implemented the POC. As the core
optimization logic remains the same and the POC's performance can still
reflect the current FLIP's expected [...]
Hi Yunfeng,

Thanks for the proposal. It makes sense to offer the optimization. I have
some NIT questions.

1. I guess you changed your thoughts while coding the POC: I found
pipeline.enable-operator-timestamp in the code, but
pipeline.force-timestamp-support is defined in the FLIP.
2. About the [...]
Hi all,

Dong (cc'ed) and I are opening this thread to discuss our proposal to
support optimizing StreamRecord's serialization performance.

Currently, a StreamRecord would be converted into a 1-byte tag (+ 8-byte
timestamp) + N-byte serialized value during the serialization process. In
scenarios [...]
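[Editor's sketch to make the byte layout above concrete: plain Java, not Flink's actual StreamElementSerializer, and the tag constants below are made up for illustration. It encodes a record as a 1-byte tag, an optional 8-byte timestamp, and the N-byte serialized value:]

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class RecordLayoutSketch {
    // Hypothetical tag values; the real serializer defines its own constants.
    static final byte TAG_WITH_TIMESTAMP = 0;
    static final byte TAG_WITHOUT_TIMESTAMP = 1;

    // Serialize: 1-byte tag (+ 8-byte timestamp) + N-byte value.
    static byte[] serialize(byte[] value, Long timestampOrNull) {
        boolean hasTs = timestampOrNull != null;
        ByteBuffer buf = ByteBuffer.allocate(1 + (hasTs ? 8 : 0) + value.length);
        buf.put(hasTs ? TAG_WITH_TIMESTAMP : TAG_WITHOUT_TIMESTAMP);
        if (hasTs) {
            buf.putLong(timestampOrNull);
        }
        buf.put(value);
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] value = "v".getBytes(StandardCharsets.UTF_8); // 1-byte payload
        // With a timestamp: 1 (tag) + 8 (timestamp) + 1 (value) bytes.
        System.out.println(serialize(value, 42L).length);  // 10
        // Without a timestamp, only the 1-byte tag precedes the value.
        System.out.println(serialize(value, null).length); // 2
    }
}
```

The sketch shows why dropping the timestamp (and ideally the tag) saves a fixed 8–9 bytes per record, which matters most for small records.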