[Gluster-devel] Can we call glfs_fini() when glfs_init() failed? (a.k.a. core generated in master/3.7)
I'm able to hit a crash in master and 3.7 with the steps described in this BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1220996#c0

In glfs-heal.c and in other examples, we call glfs_fini (fs) if 'fs' is valid, irrespective of whether glfs_init() succeeded or not. Is it okay to do this? I also found that even though glfs_init() returns -1 when the volume is not started, fs->init == 1.

Thanks,
Ravi

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
[Gluster-devel] Canceled Event: Gluster Community Weekly Meeting @ Wed May 13, 2015 5:30pm - 6:30pm (vbel...@redhat.com)
[invite.ics attachment, application/ics]

Cancelled: Gluster Community Weekly Meeting
When: Wed May 13, 2015, 5:30pm - 6:30pm IST (Asia/Calcutta)
Where: #gluster-meeting on irc.freenode.net
Organizer: Vijay Bellur
Status: CANCELLED
Description: Greetings, this is the new weekly slot to discuss all aspects concerning the Gluster community. Agenda - https://public.pad.fsfe.org/p/gluster-community-meetings. Please feel free to add your agenda items before the meeting. Cheers, Vijay
[Gluster-devel] [Update] Gluster Community Weekly Meeting
Cancelling this week due to the ongoing Gluster Summit.

You have been invited to the following event.

Title: Gluster Community Weekly Meeting

Greetings,

This is the new weekly slot to discuss all aspects concerning the Gluster community.

Agenda - https://public.pad.fsfe.org/p/gluster-community-meetings

Please feel free to add your agenda items before the meeting.

Cheers,
Vijay

When: Wed May 13, 2015 5:30pm - 6:30pm India Standard Time
Where: #gluster-meeting on irc.freenode.net
Who: * Vijay Bellur - organizer * nde...@redhat.com * jim...@redhat.com * dlamb...@redhat.com * pport...@redhat.com * jcl...@redhat.com * gluster-us...@gluster.org * nmaje...@suntradingllc.com * o...@heha.org * btur...@redhat.com * kdhan...@redhat.com * rwhee...@redhat.com * jshuc...@redhat.com * nbala...@redhat.com * lpa...@redhat.com * bjorn.sko...@basefarm.com * gluster-devel@gluster.org * vah...@doruk.net.tr * rta...@redhat.com * kkeit...@redhat.com
Re: [Gluster-devel] Issue with THIS and libgfapi
On 05/12/2015 07:36 PM, Poornima Gurusiddaiah wrote:

Hi,

We recently uncovered an issue with THIS and libgfapi; it can be generalized to any process having multiple glusterfs_ctxs.

Before the master xlator (fuse/libgfapi) is created, all the code that accesses THIS uses the global_xlator object, which is defined globally for the whole process. The problem is that multiple threads start modifying THIS and overwrite the global_xlator's ctx, e.g. in glfs_new:

glfs_new ()
{
        ...
        ctx = glusterfs_ctx_new ();
        glusterfs_globals_init ();
        THIS = NULL;    /* implies THIS = &global_xlator */
        THIS->ctx = ctx;
        ...
}

The issue is more severe than it appears: the other threads (epoll, timer, sigwaiter), when not executing in fop context, will always refer to global_xlator and global_xlator->ctx. Because of the probable race condition explained above, we may be referring to stale ctxs, which could lead to crashes.

Probable solution: currently THIS is thread specific, but the global xlator object it modifies is global to all threads! The obvious association would be to have a global_xlator per ctx instead of per process. The changes would be as follows:

- Have a new global_xlator object in glusterfs_ctx.
- After every creation of a new ctx, assign THIS = new_ctx->global_xlator.
- But how do we set THIS in every thread (epoll, timer, etc.) that gets created as part of that ctx? Replace all the pthread_create calls for the ctx threads with gf_pthread_create:

gf_pthread_create (fn, ..., ctx)
{
        ...
        thr_id = pthread_create (global_thread_init, fn, ctx, ...);
        ...
}

global_thread_init (fn, ctx, args)
{
        THIS = ctx->global_xlator;
        fn (args);
}

The other solution would be to not associate threads with a ctx, but instead share them among ctxs.

Please let me know your thoughts on the same.

Regards,
Poornima

Hi Poornima,

Recently, with the glusterfs-3.7 beta1 rpms, while creating a VM image using qemu-img, I saw the following errors:

[2015-05-08 09:04:14.358896] E [rpc-transport.c:512:rpc_transport_unref] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7f51f6bb6516] (--> /lib64/libgfrpc.so.0(rpc_transport_unref+0xa3)[0x7f51f965e493] (--> /lib64/libgfrpc.so.0(rpc_clnt_unref+0x5c)[0x7f51f96617dc] (--> /lib64/libglusterfs.so.0(+0x1edc1)[0x7f51f6bb2dc1] (--> /lib64/libglusterfs.so.0(+0x1ed55)[0x7f51f6bb2d55] ) 0-rpc_transport: invalid argument: this

[2015-05-08 09:04:14.359085] E [rpc-transport.c:512:rpc_transport_unref] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7f51f6bb6516] (--> /lib64/libgfrpc.so.0(rpc_transport_unref+0xa3)[0x7f51f965e493] (--> /lib64/libgfrpc.so.0(rpc_clnt_unref+0x5c)[0x7f51f96617dc] (--> /lib64/libglusterfs.so.0(+0x1edc1)[0x7f51f6bb2dc1] (--> /lib64/libglusterfs.so.0(+0x1ed55)[0x7f51f6bb2d55] ) 0-rpc_transport: invalid argument: this

[2015-05-08 09:04:14.359241] E [rpc-transport.c:512:rpc_transport_unref] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7f51f6bb6516] (--> /lib64/libgfrpc.so.0(rpc_transport_unref+0xa3)[0x7f51f965e493] (--> /lib64/libgfrpc.so.0(rpc_clnt_unref+0x5c)[0x7f51f96617dc] (--> /lib64/libglusterfs.so.0(+0x1edc1)[0x7f51f6bb2dc1] (--> /lib64/libglusterfs.so.0(+0x1ed55)[0x7f51f6bb2d55] ) 0-rpc_transport: invalid argument: this

Is this a consequence of the issue that you are talking about?

--
Satheesaran
[Gluster-devel] Issue with THIS and libgfapi
Hi,

We recently uncovered an issue with THIS and libgfapi; it can be generalized to any process having multiple glusterfs_ctxs.

Before the master xlator (fuse/libgfapi) is created, all the code that accesses THIS uses the global_xlator object, which is defined globally for the whole process. The problem is that multiple threads start modifying THIS and overwrite the global_xlator's ctx, e.g. in glfs_new:

glfs_new ()
{
        ...
        ctx = glusterfs_ctx_new ();
        glusterfs_globals_init ();
        THIS = NULL;    /* implies THIS = &global_xlator */
        THIS->ctx = ctx;
        ...
}

The issue is more severe than it appears: the other threads (epoll, timer, sigwaiter), when not executing in fop context, will always refer to global_xlator and global_xlator->ctx. Because of the probable race condition explained above, we may be referring to stale ctxs, which could lead to crashes.

Probable solution: currently THIS is thread specific, but the global xlator object it modifies is global to all threads! The obvious association would be to have a global_xlator per ctx instead of per process. The changes would be as follows:

- Have a new global_xlator object in glusterfs_ctx.
- After every creation of a new ctx, assign THIS = new_ctx->global_xlator.
- But how do we set THIS in every thread (epoll, timer, etc.) that gets created as part of that ctx? Replace all the pthread_create calls for the ctx threads with gf_pthread_create:

gf_pthread_create (fn, ..., ctx)
{
        ...
        thr_id = pthread_create (global_thread_init, fn, ctx, ...);
        ...
}

global_thread_init (fn, ctx, args)
{
        THIS = ctx->global_xlator;
        fn (args);
}

The other solution would be to not associate threads with a ctx, but instead share them among ctxs.

Please let me know your thoughts on the same.

Regards,
Poornima
[Gluster-devel] No Bug Triage meeting today
Many of the Gluster developers and contributors to the weekly Gluster Bug Triage meeting are at the Gluster Summit in Barcelona. We will not be hosting the Gluster Bug Triage meeting today.

Let this not stop you from triaging bugs that the maintainers may have missed. Please use the URLs that are in the agenda and triage the bugs: https://public.pad.fsfe.org/p/gluster-bug-triage

Many thanks,
Niels (and all people that reported bugs)
[Gluster-devel] Etherpad for the Gluster Summit 2015
We have just started the Gluster Summit 2015 in Barcelona this morning. It would be great if attendees can capture their notes in the following etherpad: https://public.pad.fsfe.org/p/gluster-summit-2015

Later on, presentations will be shared and blog posts will appear in all the usual places.

Thanks,
Niels