Re: [Gluster-devel] GF_CALLOC to GF_MALLOC conversion - is it safe?

2019-03-21 Thread Yaniv Kaul
On Thu, Mar 21, 2019 at 12:45 PM Atin Mukherjee wrote:

> All,
>
> In the last few releases of glusterfs, with stability as a primary theme
> of the releases, there have been many code-optimization changes made in
> the expectation that they would make gluster perform better. While many
> of these changes do help, of late we have started seeing some adverse
> effects from them, one especially being the calloc to malloc
> conversions. I do understand that malloc eliminates the extra memset
> overhead that calloc bears, but with modern allocators and compilers I
> am not sure that makes any significant difference. What is certain is
> that if such a conversion isn't done carefully it can introduce a lot of
> bugs, and I'm writing this email to share one such experience.
>
> Sanju & I spent the last two days trying to figure out why
> https://review.gluster.org/#/c/glusterfs/+/22388/ wasn't working on
> Sanju's system, while the same fix ran without problems in my gluster
> containers. After a significant amount of time, we figured out that a
> malloc call [1] (which was a calloc earlier) is the culprit. As you can
> see, in this function we allocate txn_id and copy event->txn_id into it
> through gf_uuid_copy(). But when we stepped through it in gdb, txn_id
> did not end up holding event->txn_id; it contained junk values, which
> caused glusterd_clear_txn_opinfo to be invoked later with a wrong
> txn_id, so the leaks the fix was meant to plug remained.
>

- I'm not sure I understand what 'wasn't exactly copied' means: it either
copied event->txn_id or it did not. Is event->txn_id not fully populated
somehow?
- This is a regression caused by 81cbbfd1d870bea49b8aafe7bebb9e8251190918,
which I introduced on August 4th, and we are only discovering it now. This
is not good.
Without looking, I assume almost all CALLOC->MALLOC changes were made on
positive paths of the code, which means they are not tested well.
This file, while having low code coverage overall, seems to be covered [1],
so I'm not sure why we are only finding this now.
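
To make the failure mode concrete, here is a minimal standalone sketch
(hypothetical names, not the actual glusterd code; GF_MALLOC/GF_CALLOC are
assumed to behave like plain malloc/calloc). On an error path that skips
the copy, calloc'd memory at least reads back as the all-zero uuid, while
malloc'd memory reads back as heap junk:

    /* sketch.c: illustrative only. Compile: cc sketch.c -luuid */
    #include <stdio.h>
    #include <stdlib.h>
    #include <uuid/uuid.h>

    /* Hypothetical stand-in for the allocation site: on the error path
     * the buffer escapes without ever being written to. */
    static uuid_t *get_txn_id(const uuid_t event_txn_id, int error_path)
    {
        uuid_t *txn_id = malloc(sizeof(uuid_t)); /* was calloc(1, ...) */
        if (!txn_id)
            return NULL;
        if (error_path)
            return txn_id;      /* escapes uninitialized: heap junk */
        uuid_copy(*txn_id, event_txn_id);
        return txn_id;
    }

    int main(void)
    {
        uuid_t event_txn_id;
        char s[37];

        uuid_generate(event_txn_id);
        uuid_t *txn_id = get_txn_id(event_txn_id, 1 /* negative path */);
        if (!txn_id)
            return 1;

        /* With calloc this printed the all-zero uuid, which later
         * lookups could at least detect; with malloc it is junk. */
        uuid_unparse(*txn_id, s);
        printf("txn_id on the error path: %s\n", s);
        free(txn_id);
        return 0;
    }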

>
> This was quite painful to debug, and we had to spend real time to figure
> it out. Considering we have converted many such calls in the past, I'd
> urge that we review all such conversions and check whether they have any
> side effects. Otherwise we might end up running into many memory-related
> bugs later on. OTOH, going forward I'd request every patch
> owner/maintainer to pay special attention to these conversions and make
> sure they are really beneficial and error free. IMO, the general
> guideline should be: for bigger buffers, malloc makes better sense but
> has to be used carefully; for smaller sizes, we should stick to calloc.
>
> What do others think about it?
>

I think I might have been aggressive with the changes, but I do feel they
are important in some areas where they make sense. For example, in
libglusterfs/src/inode.c:
    new->inode_hash = (void *)GF_CALLOC(65536, sizeof(struct list_head),
                                        gf_common_mt_list_head);
    if (!new->inode_hash)
        goto out;

    new->name_hash = (void *)GF_CALLOC(new->hashsize,
                                       sizeof(struct list_head),
                                       gf_common_mt_list_head);
    if (!new->name_hash)
        goto out;


And just a few lines later:

    for (i = 0; i < 65536; i++) {
        INIT_LIST_HEAD(&new->inode_hash[i]);
    }

    for (i = 0; i < new->hashsize; i++) {
        INIT_LIST_HEAD(&new->name_hash[i]);
    }


So the zeroing here is a waste of cycles for no good reason: every entry
is initialized immediately afterwards anyway. I agree not every CALLOC is
worth converting.
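
For contrast, here is a standalone sketch (hypothetical names, modeling
the glusterfs list-head semantics) of what makes such a conversion safe:
malloc is equivalent precisely because the loop writes every element
before anything can read it:

    /* safe_conversion.c: illustrative only. Compile: cc safe_conversion.c */
    #include <stdlib.h>

    struct list_head { struct list_head *next, *prev; };

    static inline void INIT_LIST_HEAD(struct list_head *h)
    {
        h->next = h->prev = h;   /* empty list points to itself */
    }

    #define HASH_SIZE 65536

    static struct list_head *make_inode_hash(void)
    {
        /* was: calloc(HASH_SIZE, sizeof(struct list_head)) */
        struct list_head *hash = malloc(HASH_SIZE *
                                        sizeof(struct list_head));
        if (!hash)
            return NULL;
        /* Every byte of the buffer is written here, so nothing ever
         * observes the uninitialized state left by malloc. */
        for (size_t i = 0; i < HASH_SIZE; i++)
            INIT_LIST_HEAD(&hash[i]);
        return hash;
    }

    int main(void)
    {
        struct list_head *h = make_inode_hash();
        free(h);
        return 0;
    }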

One more note: I'd love to be able to measure the effect, but there is no
CI job with benchmarks, including CPU and memory consumption, with which
we could evaluate such changes.
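
In the meantime, a rough local microbenchmark along these lines
(illustrative only; results vary a lot with the allocator, the size, and
whether the kernel hands back pre-zeroed pages for large allocations) can
at least bound the memset cost we are arguing about:

    /* bench.c: rough microbenchmark. Compile: cc -O2 bench.c -o bench */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static double now_sec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    static double run(size_t size, int iters, int zeroed)
    {
        double t0 = now_sec();
        for (int i = 0; i < iters; i++) {
            char *p = zeroed ? calloc(1, size) : malloc(size);
            if (!p)
                abort();
            p[0] = 1;  /* touch it so the allocation isn't optimized away */
            free(p);
        }
        return now_sec() - t0;
    }

    int main(void)
    {
        size_t sizes[] = { 64, 4096, 1 << 20 };
        for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
            printf("%8zu B: calloc %.3fs  malloc %.3fs\n", sizes[i],
                   run(sizes[i], 100000, 1), run(sizes[i], 100000, 0));
        return 0;
    }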

And lastly, we need better performance and better scalability. We are not
keeping up with HW advancements (especially NVMe, pmem and the like) and,
just like other storage stacks, we are becoming somewhat of a performance
bottleneck.
Y.

>
> [1]
> https://github.com/gluster/glusterfs/blob/master/xlators/mgmt/glusterd/src/glusterd-op-sm.c#L5681
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel
