+1
The output looks great! Looking forward to it.

On Mon, Aug 21, 2023 at 9:39 AM Hang Chen <chenh...@apache.org> wrote:

> +1
>
> Looking forward to this feature.
>
> Thanks,
> Hang
>
> On Sun, Aug 20, 2023 at 9:24 PM Enrico Olivelli <eolive...@gmail.com> wrote:
> >
> > Hello,
> > The proposal is well written, I have no questions.
> > We have been waiting for this feature for a long time.
> >
> > I support it.
> >
> > Thanks
> > Enrico
> >
> > On Tue, Aug 15, 2023 at 4:28 AM horizonzy <horizo...@apache.org> wrote:
> >
> > > Hi everyone,
> > > There is a proposal about batched reading (
> > > https://github.com/apache/bookkeeper/pull/4051) to improve read
> > > performance.
> > >
> > > The objective of this proposal is to enhance entry-reading performance
> > > by introducing a batched entry reading protocol that takes the expected
> > > count and size of entries into account; a usage sketch follows the list
> > > below.
> > > 1. Optimize entry-reading performance: Reading multiple entries in a
> > > single RPC request reduces network communication and RPC call overhead,
> > > thereby improving read performance.
> > > 2. Minimize CPU resource consumption: Aggregating multiple entries into
> > > a single RPC request reduces the number of requests and responses,
> > > which in turn lowers CPU resource consumption.
> > > 3. Streamline client code: Without protocol support, a client must
> > > approximate reads by anticipated count or size on its own. For example,
> > > Apache Pulsar calculates the start and end entry IDs for each read
> > > request from the average size of past entries, which adds unnecessary
> > > complexity to the implementation and cannot guarantee reliable behavior.
> > >
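> > > Below is a minimal sketch of how the client side could use this,
> > > assuming a hypothetical method batchReadAsync(startEntry, maxCount,
> > > maxSize) on ReadHandle; the name and signature here are illustrative
> > > guesses at the proposed API, not the final interface from the PR:
> > >
> > > import java.util.concurrent.CompletableFuture;
> > > import org.apache.bookkeeper.client.api.LedgerEntries;
> > > import org.apache.bookkeeper.client.api.LedgerEntry;
> > > import org.apache.bookkeeper.client.api.ReadHandle;
> > >
> > > public class BatchReadSketch {
> > >     // Read up to maxCount entries (capped at maxSize bytes in total)
> > >     // starting from startEntry with a single RPC, instead of issuing
> > >     // one RPC per entry. batchReadAsync is an assumed API name.
> > >     static void readBatch(ReadHandle reader, long startEntry)
> > >             throws Exception {
> > >         CompletableFuture<LedgerEntries> future =
> > >                 reader.batchReadAsync(startEntry, 100, 5 * 1024 * 1024);
> > >         try (LedgerEntries entries = future.get()) {
> > >             for (LedgerEntry entry : entries) {
> > >                 // The bookie decides how many entries actually fit
> > >                 // under the count and size limits.
> > >                 System.out.println("entry " + entry.getEntryId()
> > >                         + ": " + entry.getLength() + " bytes");
> > >             }
> > >         }
> > >     }
> > > }
> > >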
> > > Here is the output of the BookKeeper perf tool with ensemble=1,
> > > write=1, and ack=1:
> > > Batch(100): Read 1000100 entries in 8904ms
> > > Batch(500): Read 1000500 entries in 12182ms
> > > Non-batch: Read 1000130 entries in 199928ms
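> > >
> > > (Back-of-the-envelope: Batch(100) reads about 1000100 / 8.904s ≈ 112,000
> > > entries/s, versus about 1000130 / 199.928s ≈ 5,000 entries/s without
> > > batching, i.e. roughly a 22x speedup in this setup.)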
> > >
> > > If you have any questions, feel free to discuss them here. Thanks!
> > >
>
