On Fri, Aug 10, 2018 at 11:21 AM, Pranith Kumar Karampuri <
[email protected]> wrote:

>
>
> On Fri, Aug 10, 2018 at 8:54 AM Raghavendra Gowdappa <[email protected]>
> wrote:
>
>> All,
>>
>> Details can be found at:
>> https://build.gluster.org/job/centos7-regression/2190/console
>>
>> Process that core dumped: glfs_shdheal
>>
>> Note that the patch this regression failed on is against readdir-ahead,
>> which is not loaded in the xlator graph of the self-heal daemon.
>>
>> From bt,
>>
>>         __FUNCTION__ = "syncop_getxattr"
>> #8  0x00007f5af8738aef in syncop_gfid_to_path_hard (itable=0x7f5ae401ce50, subvol=0x7f5ae40079e0, gfid=0x7f5adc00b4e8 "", inode=0x0, path_p=0x7f5acbffebe8, hard_resolve=false) at /home/jenkins/root/workspace/centos7-regression/libglusterfs/src/syncop-utils.c:585
>>         ret = 0
>>         path = 0x0
>>         loc = {path = 0x0, name = 0x0, inode = 0x7f5ac00028a8, parent = 0x0, gfid = '\000' <repeats 15 times>, pargfid = '\000' <repeats 15 times>}
>>         xattr = 0x0
>> #9  0x00007f5af8738c28 in syncop_gfid_to_path (itable=0x7f5ae401ce50, subvol=0x7f5ae40079e0, gfid=0x7f5adc00b4e8 "", path_p=0x7f5acbffebe8) at /home/jenkins/root/workspace/centos7-regression/libglusterfs/src/syncop-utils.c:636
>> No locals.
>> #10 0x00007f5aeaad65e1 in afr_shd_selfheal (healer=0x7f5ae401d490, child=0, gfid=0x7f5adc00b4e8 "") at /home/jenkins/root/workspace/centos7-regression/xlators/cluster/afr/src/afr-self-heald.c:331
>>         ret = 0
>>         eh = 0x0
>>         priv = 0x7f5ae401c780
>>         shd = 0x7f5ae401c8e8
>>         shd_event = 0x0
>>         path = 0x0
>>         subvol = 0x7f5ae40079e0
>>         this = 0x7f5ae400d540
>>         crawl_event = 0x7f5ae401d4a0
>> #11 0x00007f5aeaad6de5 in afr_shd_full_heal (subvol=0x7f5ae40079e0, entry=0x7f5adc00b440, parent=0x7f5acbffee20, data=0x7f5ae401d490) at /home/jenkins/root/workspace/centos7-regression/xlators/cluster/afr/src/afr-self-heald.c:541
>>         healer = 0x7f5ae401d490
>>         this = 0x7f5ae400d540
>>         priv = 0x7f5ae401c780
>> #12 0x00007f5af8737b2f in syncop_ftw (subvol=0x7f5ae40079e0, loc=0x7f5acbffee20, pid=-6, data=0x7f5ae401d490, fn=0x7f5aeaad6d40 <afr_shd_full_heal>) at /home/jenkins/root/workspace/centos7-regression/libglusterfs/src/syncop-utils.c:123
>>         child_loc = {path = 0x0, name = 0x0, inode = 0x0, parent = 0x0, gfid = '\000' <repeats 15 times>, pargfid = '\000' <repeats 15 times>}
>>         fd = 0x7f5ac0001398
>>
>>
>> The assert for a non-NULL gfid failed in client_pre_getxattr_v2. From the bt,
>> it looks like a NULL gfid was passed down from afr_shd_full_heal.
>>
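
To make the failure mode concrete: a gfid is a 16-byte UUID, and gdb prints the
all-zero gfid above as "" because its first byte is NUL. Below is a small
standalone sketch in plain C (not the actual GlusterFS code; gfid_is_null and
pre_getxattr_check are hypothetical stand-ins for gf_uuid_is_null and the
validation in client_pre_getxattr_v2) showing the kind of non-NULL-gfid guard
that appears to have tripped:

/*
 * Standalone illustration only.  A gfid is a 16-byte UUID; an all-zero
 * gfid is "null" and should never reach the protocol/client getxattr path.
 */
#include <assert.h>
#include <stdio.h>
#include <string.h>

typedef unsigned char gfid_t[16];

/* True when all 16 bytes of the gfid are zero (cf. gf_uuid_is_null()). */
static int
gfid_is_null(const gfid_t gfid)
{
        static const gfid_t zero = {0};

        return memcmp(gfid, zero, sizeof(gfid_t)) == 0;
}

/* Hypothetical stand-in for the pre-op validation that failed in the core. */
static void
pre_getxattr_check(const gfid_t gfid)
{
        /* The real code asserts that the fop carries a valid gfid. */
        assert(!gfid_is_null(gfid));
}

int
main(void)
{
        gfid_t empty = {0};     /* what the heal path appears to have passed */

        printf("gfid_is_null(empty) = %d\n", gfid_is_null(empty));
        pre_getxattr_check(empty);      /* aborts here, like the regression run */
        return 0;
}

Compiled and run, this prints gfid_is_null(empty) = 1 and then aborts on the
assert, which matches the crash signature above.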
>
> Most probably this is because of the change to gf_link_inode_from_dirent()
> in your patch. Why did you make that change? I am wondering whether we need
> to change the afr/ec code accordingly.
>

Please hold on. I'll let you know whether changes in afr/ec are needed; I am
still considering whether that change is really necessary.
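
For context while we decide, here is a minimal standalone sketch (plain C with
hypothetical names, not the real afr code) of the kind of defensive check the
afr/ec crawl callbacks could grow if, after that change, an entry's gfid is no
longer guaranteed to be populated when the callback runs:

/*
 * Standalone sketch only, in the spirit of afr_shd_full_heal(): skip
 * entries whose gfid was never filled in, instead of letting a null gfid
 * reach getxattr and trip the assert seen in the backtrace.
 */
#include <stdio.h>
#include <string.h>

typedef unsigned char gfid_t[16];

struct crawl_entry {
        const char *name;
        gfid_t gfid;            /* set only when the inode was linked */
};

/* True when all 16 bytes of the gfid are zero (cf. gf_uuid_is_null()). */
static int
gfid_is_null(const gfid_t gfid)
{
        static const gfid_t zero = {0};

        return memcmp(gfid, zero, sizeof(gfid_t)) == 0;
}

/* Hypothetical crawl callback with the extra guard. */
static int
full_heal_cb(struct crawl_entry *entry)
{
        if (gfid_is_null(entry->gfid)) {
                fprintf(stderr, "skipping %s: gfid not populated\n",
                        entry->name);
                return 0;
        }

        printf("would heal %s\n", entry->name);
        return 0;
}

int
main(void)
{
        struct crawl_entry good = {"entry-with-gfid", {0x9a, 0x01}};
        struct crawl_entry bad  = {"entry-without-gfid", {0}};

        full_heal_cb(&good);
        full_heal_cb(&bad);
        return 0;
}

Whether such a guard belongs in afr/ec, or whether the
gf_link_inode_from_dirent() change should simply be dropped, is exactly the
question above.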


>
>>
>>         __PRETTY_FUNCTION__ = "client_pre_getxattr_v2"
>> #5  0x00007f5aeada8f2a in client4_0_getxattr (frame=0x7f5ac0008198, this=0x7f5ae40079e0, data=0x7f5acbffdcc0) at /home/jenkins/root/workspace/centos7-regression/xlators/protocol/client/src/client-rpc-fops_v2.c:4287
>>         conf = 0x7f5ae40293e0
>>         args = 0x7f5acbffdcc0
>>         req = {gfid = '\000' <repeats 15 times>, namelen = 0, name = 0x0, xdata = {xdr_size = 0, count = 0, pairs = {pairs_len = 0, pairs_val = 0x0}}}
>>         dict = 0x0
>>         ret = 0
>>         op_ret = -1
>>         op_errno = 116
>>         local = 0x7f5ac00082a8
>>         __FUNCTION__ = "client4_0_getxattr"
>>         __PRETTY_FUNCTION__ = "client4_0_getxattr"
>>
>>
>> regards,
>> Raghavendra
>>
>
>
> --
> Pranith
>
_______________________________________________
Gluster-devel mailing list
[email protected]
https://lists.gluster.org/mailman/listinfo/gluster-devel
