[Gluster-devel] Updated invitation: Gluster Release-8: Planning @ Tue Jan 21, 2020 11:30am - 12:45pm (IST) (gluster-devel@gluster.org)
When: Tue Jan 21, 2020, 11:30am – 12:45pm (IST)
Organizer: a...@kadalu.io
Video call: https://meet.google.com/uyv-pdyx-nvi (+1 971-232-0053, PIN: 359163210#)

Hello everyone,

It's been some time since Release-7 (which happened ~Nov 15, 2019), and we need to reach concrete agreement on Release-8 and get going. There are already some efforts in progress, and I would suggest that all of you who would like to contribute join the meeting and make Release-8 a success, and a good base for GlusterX execution.

We discussed in today's (Jan 14th) community meeting that a focused meeting for Release-8 would be good. Please join and propose your ideas.

Ref:
* https://github.com/gluster/glusterfs/milestone/10
* https://lists.gluster.org/pipermail/gluster-devel/2019-November/056709.html

NOTE: This is not a very NA-friendly time... Happy to have one in that timezone after this meeting.

___
Community Meeting Calendar:

APAC Schedule - Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968

NA/EMEA Schedule - Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel
[Gluster-devel] Compiling Gluster RPMs for v5.x on Suse SLES15
Dear Gluster Community,

I want to compile my own Gluster RPMs for v5.x on a SUSE SLES15 machine. I am using the spec file from here:
https://github.com/gluster/glusterfs-suse/blob/sles15-glusterfs-5/glusterfs.spec

There is a build requirement, 'rpcgen', which confuses me. I had a chat with Kaleb Keithley a few months ago:
https://lists.gluster.org/pipermail/gluster-users/2019-May/036518.html

This statement seems interesting:

"Miuku on #opensuse-buildservice poked around and found that the unbundled rpcgen in SLE_15 comes from the rpcsvc-proto rpm. (Not the rpcgen rpm as it does in Fedora and RHEL8.) All the gluster community packages for SLE_15 going back to glusterfs-5.0 in October 2018 have used the unbundled rpcgen. You can do the same, or remove the BuildRequires: rpcgen line and use the glibc bundled rpcgen."

Unfortunately there is no rpcsvc-proto rpm for SLES15:
https://software.opensuse.org/package/rpcsvc-proto?locale=fa

I don't know where the people from the SUSE OBS got this rpm. It might be possible to compile the rpcsvc-proto source rpm on SLES15, but in my opinion that is not a good solution. So, following Kaleb's advice, I removed the 'rpcgen' requirement from the spec file and created the RPMs using the glibc-bundled rpcgen. It works, and Gluster seems to run stably.

Do you think there are any risks in using the glibc-bundled rpcgen to create Gluster 5.x RPMs, or should I prefer the rpcgen from the rpcsvc-proto rpm?

Regards
David Spisla
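For reference, the spec edit Kaleb suggested can be scripted. This is a minimal sketch only; the stand-in spec below is illustrative, and on a real build host you would point SPEC at your checked-out glusterfs.spec instead:

```shell
# Sketch: remove the 'BuildRequires: rpcgen' line so the build falls back
# to the rpcgen bundled with glibc instead of the one from rpcsvc-proto.
# A tiny stand-in spec is created here purely for illustration.
SPEC=$(mktemp /tmp/glusterfs-spec.XXXXXX)
cat > "$SPEC" <<'EOF'
Name: glusterfs
Version: 5.11
BuildRequires: libacl-devel
BuildRequires: rpcgen
BuildRequires: openssl-devel
EOF

# Drop the rpcgen build requirement in place.
sed -i '/^BuildRequires:[[:space:]]*rpcgen$/d' "$SPEC"

# Confirm it is gone; the other BuildRequires lines are untouched.
grep -c '^BuildRequires:' "$SPEC"                        # prints 2
grep -q 'rpcgen' "$SPEC" && echo "still present" || echo "removed"

# On a real build host you would now rebuild as usual, e.g.:
#   rpmbuild -ba glusterfs.spec
rm -f "$SPEC"
```

The sed pattern is anchored to the whole line so it cannot accidentally remove a different BuildRequires entry that merely mentions rpcgen in a comment.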
Re: [Gluster-devel] [Release-8] Thin-Arbiter: Unique-ID requirement
From a design perspective, 2 is the better choice. However, I'd like to see a design for how the cluster id will be generated and maintained (covering peer addition/deletion scenarios, node replacement, etc.).

On Tue, Jan 14, 2020 at 1:42 PM Amar Tumballi wrote:

> Hello,
>
> As we are gearing up for Release-8 and its planning, I wanted to bring up one of my favorite topics, 'Thin-Arbiter' (or Tie-Breaker/Metro-Cluster, etc.).
>
> We shipped thin-arbiter in the v7.0 release itself, and it works great when we have just one Gluster cluster. I am talking about a situation which involves multiple Gluster clusters and easier management of thin-arbiter nodes. (Ref: https://github.com/gluster/glusterfs/issues/763)
>
> I am working towards hosting a thin-arbiter node service (free of cost) to which any Gluster deployment can connect, saving the cost of the additional replica that is required today to avoid split-brain situations. Tie-breaker storage and process needs are so small that we could easily handle all Gluster deployments to date on just one machine. When I looked at the code with this goal, I found that the current implementation doesn't support it, mainly because it uses the 'volumename' in the file it creates. This is fine for one cluster, as we don't allow duplicate volume names within a single cluster, and OK for multiple clusters as long as the volume names don't collide.
>
> To resolve this properly, we have 2 options (as per my thinking now) to make it a truly global service.
>
> 1. Add a 'volume-id' option to the AFR volume itself, so each instance picks up the volume-id and uses it in the thin-arbiter file name. A variant of this is submitted for review - https://review.gluster.org/23723 - but as it uses the volume-id from io-stats, that particular patch fails in brick-mux and shd-mux scenarios. A proper enhancement of this patch is to provide the 'volume-id' option in AFR itself, so that glusterd (while generating volfiles) sends the proper vol-id to the instance.
>
> Pros: Minimal code changes on top of the above patch.
> Cons: One more option in AFR (not exposed to users).
>
> 2. Add a *cluster-id* to glusterd, and pass it to all processes. Let replicate use this in the thin-arbiter file. This too would solve the issue.
>
> Pros: A cluster-id is good to have in any distributed system, especially when there are deployments of 3 nodes each in different clusters. Being able to identify bricks and services as part of a cluster is better.
>
> Cons: The code changes are larger, and in the glusterd component.
>
> On another note, option 1 above is purely for the thin-arbiter feature, whereas option 2 would also be useful for debugging and other solutions that involve multiple clusters.
>
> Let me know what you all think about this. This would be good to discuss in next week's meeting and take to completion.
>
> Regards,
> Amar
> ---
> https://kadalu.io
> Storage made easy for k8s
[Gluster-devel] [Release-8] Thin-Arbiter: Unique-ID requirement
Hello,

As we are gearing up for Release-8 and its planning, I wanted to bring up one of my favorite topics, 'Thin-Arbiter' (or Tie-Breaker/Metro-Cluster, etc.).

We shipped thin-arbiter in the v7.0 release itself, and it works great when we have just one Gluster cluster. I am talking about a situation which involves multiple Gluster clusters and easier management of thin-arbiter nodes. (Ref: https://github.com/gluster/glusterfs/issues/763)

I am working towards hosting a thin-arbiter node service (free of cost) to which any Gluster deployment can connect, saving the cost of the additional replica that is required today to avoid split-brain situations. Tie-breaker storage and process needs are so small that we could easily handle all Gluster deployments to date on just one machine. When I looked at the code with this goal, I found that the current implementation doesn't support it, mainly because it uses the 'volumename' in the file it creates. This is fine for one cluster, as we don't allow duplicate volume names within a single cluster, and OK for multiple clusters as long as the volume names don't collide.

To resolve this properly, we have 2 options (as per my thinking now) to make it a truly global service.

1. Add a 'volume-id' option to the AFR volume itself, so each instance picks up the volume-id and uses it in the thin-arbiter file name. A variant of this is submitted for review - https://review.gluster.org/23723 - but as it uses the volume-id from io-stats, that particular patch fails in brick-mux and shd-mux scenarios. A proper enhancement of this patch is to provide the 'volume-id' option in AFR itself, so that glusterd (while generating volfiles) sends the proper vol-id to the instance.

Pros: Minimal code changes on top of the above patch.
Cons: One more option in AFR (not exposed to users).

2. Add a *cluster-id* to glusterd, and pass it to all processes. Let replicate use this in the thin-arbiter file. This too would solve the issue.

Pros: A cluster-id is good to have in any distributed system, especially when there are deployments of 3 nodes each in different clusters. Being able to identify bricks and services as part of a cluster is better.

Cons: The code changes are larger, and in the glusterd component.

On another note, option 1 above is purely for the thin-arbiter feature, whereas option 2 would also be useful for debugging and other solutions that involve multiple clusters.

Let me know what you all think about this. This would be good to discuss in next week's meeting and take to completion.

Regards,
Amar
---
https://kadalu.io
Storage made easy for k8s
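To make the two options above concrete, here is a hedged sketch of what each might look like on disk. Both fragments are illustrative only: neither a 'volume-id' AFR option nor a 'cluster-id' glusterd field exists today, and all names and UUID values shown are assumptions, not real output.

```
# Option 1 (sketch): glusterd emits a volume-id option into the AFR
# section of the generated client volfile, and thin-arbiter names its
# state file after this ID instead of the volume name, so two clusters
# with a volume of the same name cannot collide.
volume testvol-replicate-0
    type cluster/replicate
    option thin-arbiter ta.example.com:/mnt/thin-arbiter
    option volume-id 7f2d9a4c-0000-0000-0000-hypothetical   # not a real option yet
    subvolumes testvol-client-0 testvol-client-1
end-volume

# Option 2 (sketch): a cluster-wide ID stored alongside the per-node UUID
# in /var/lib/glusterd/glusterd.info and passed down to all processes.
UUID=0d2b1e66-0000-0000-0000-node-uuid
cluster-id=4b8e77aa-0000-0000-0000-hypothetical   # not a real field yet
operating-version=70000
```

Under option 2, the thin-arbiter file name could combine cluster-id and volume name, which keeps names unique globally while leaving per-cluster volume naming untouched.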