On Sat, Oct 30, 2021 at 7:56 PM Nir Soffer <[email protected]> wrote:
>
> Minimize reallocs by growing the backing array by factor of 1.5.
>
> Testing shows that append() is now fast without calling reserve()
> upfront, simplifying code that uses vector.
>
> $ ./test-vector | grep bench
> bench_reserve: 1000000 appends in 0.003997 s
> bench_append: 1000000 appends in 0.003869 s
>
> In practice, this can make "nbdinfo --map" 10 milliseconds faster when
> running with an image that has 500,000 extents.

In practice this is not very useful: mapping an image with
1,000,000 extents
takes almost one second, so shaving off 20 milliseconds is not noticeable.

$ hyperfine -w3 "/usr/local/bin/nbdinfo --map $URL --totals"
Benchmark 1: /usr/local/bin/nbdinfo --map
nbd+unix:///?socket=/tmp/nbd.sock --totals
  Time (mean ± σ):     965.2 ms ±  16.4 ms    [User: 13.1 ms, System: 5.7 ms]
  Range (min … max):   951.6 ms … 994.8 ms    10 runs

>
> Signed-off-by: Nir Soffer <[email protected]>
> ---
>  common/utils/vector.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/common/utils/vector.c b/common/utils/vector.c
> index 00cd254..7df17e1 100644
> --- a/common/utils/vector.c
> +++ b/common/utils/vector.c
> @@ -41,11 +41,21 @@ int
>  generic_vector_reserve (struct generic_vector *v, size_t n, size_t itemsize)
>  {
>    void *newptr;
> +  size_t reqalloc, newalloc;
>
> -  newptr = realloc (v->ptr, (n + v->alloc) * itemsize);
> +  reqalloc = v->alloc + n;
> +  if (reqalloc < v->alloc)
> +    return -1; /* overflow */
> +
> +  newalloc = (v->alloc * 3 + 1) / 2;
> +
> +  if (newalloc < reqalloc)
> +    newalloc = reqalloc;
> +
> +  newptr = realloc (v->ptr, newalloc * itemsize);
>    if (newptr == NULL)
>      return -1;
>    v->ptr = newptr;
> -  v->alloc += n;
> +  v->alloc = newalloc;
>    return 0;
>  }
> --
> 2.31.1
>


_______________________________________________
Libguestfs mailing list
[email protected]
https://listman.redhat.com/mailman/listinfo/libguestfs