Depending on the allocator implementation, calloc may not have any
additional overhead at all. When it does, there are really only a couple of
cases where that overhead could matter, and neither should come up on
modern systems:

* If an object is allocated and freed in a tight loop. This shouldn't be
happening anyway -- reuse / pool objects with this sort of access pattern
(see the sketch after this list).
* If the object is large. Memory handed out by the kernel must not expose
data from other processes, so fresh pages arrive already zeroed and calloc
can skip touching them; and objects that consist of only a few cache lines
cost very little to zero on modern processors.
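
A minimal sketch of the pooling idea, in plain C (not Varnish code; the
"widget" type and the function names are made up for illustration). Fresh
objects come from calloc(); recycled ones are re-zeroed, which for an
object of a few cache lines is nearly free:

    #include <stdlib.h>
    #include <string.h>

    struct widget {
            struct widget   *next;          /* free-list link */
            char            payload[256];   /* a few cache lines */
    };

    static struct widget *widget_pool;      /* head of the free list */

    static struct widget *
    widget_get(void)
    {
            struct widget *w;

            if ((w = widget_pool) != NULL) {
                    widget_pool = w->next;
                    memset(w, 0, sizeof *w);    /* cheap: few cache lines */
            } else
                    w = calloc(1, sizeof *w);   /* fresh, already zeroed */
            return (w);
    }

    static void
    widget_put(struct widget *w)
    {
            w->next = widget_pool;
            widget_pool = w;
    }

The hot path never hands memory back to the allocator; zeroing happens
either in calloc (fresh kernel pages are zero anyway) or as a memset over
a handful of cache lines.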

It may be worth auditing for these situations, but I've done extensive
profiling of extremely memory-heavy workloads (hundreds of GB) in Varnish
over the past few years, and I promise that calloc is not currently a
limiting factor for latency or throughput.

Of course, if you're concerned about swapping, I'd also argue that your
cache is not properly sized.

On Tue, Mar 8, 2016, 04:28 Poul-Henning Kamp <[email protected]> wrote:

> --------
> In message <CAJV_h0YVCRfTOFk=
> [email protected]>
> , Federico Schwindt writes:
>
> >We use calloc in many places, and I do wonder how many of them really
> >need it. The downside of using calloc when it is not really needed is
> >that by zeroing the memory you end up with resident memory rather than
> >virtual, which in turn might lead to swapping.
>
> This is almost always intentional, as we generally do not over-allocate.
>
> The exception is the malloc stevedore where we do.
>
>
> --
> Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
> [email protected]         | TCP/IP since RFC 956
> FreeBSD committer       | BSD since 4.3-tahoe
> Never attribute to malice what can adequately be explained by incompetence.
>