Re: [Gluster-devel] NSR: Suggestions for a new name

2016-02-12 Thread Joseph Fernandes
My vote is for Consensus Based Replication (CBR). Also because HONDA CBR ;) http://www.honda2wheelersindia.com/cbr1000rr/ ~Joe

Re: [Gluster-devel] NSR: Suggestions for a new name

2016-02-12 Thread Avra Sengupta
Well, we got quite a few suggestions. So I went ahead and created a doodle poll. Please find the link below for the poll, and vote for the name you think will be the best. http://doodle.com/poll/h7gfdhswrbsxxiaa Regards, Avra

Re: [Gluster-devel] libgfapi libvirt memory leak version 3.7.8

2016-02-12 Thread Piotr Rybicki
Hi. On 2016-02-12 at 07:04, Soumya Koduri wrote: Hi Piotr, could you apply the gfapi patch below and check the valgrind output - http://review.gluster.org/13125 I tried both patches, on the client and my 2 bricks. Even recompiled qemu. No change - still leaks (although a few bytes less).
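For reference, a minimal sketch (host, volume and image names are assumed, not taken from the thread) of how the valgrind check discussed above could be run against a libgfapi client such as qemu-img:

    # Run a gfapi-based client under valgrind and collect the leak summary.
    valgrind --leak-check=full --show-leak-kinds=all \
             --log-file=/tmp/gfapi-valgrind.log \
             qemu-img info gluster://server1/myvol/vm1.qcow2
    # Then inspect the "definitely lost" / "still reachable" totals in the log.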

Re: [Gluster-devel] NSR: Suggestions for a new name

2016-02-12 Thread Niels de Vos
On Fri, Feb 12, 2016 at 02:05:41PM +0530, Avra Sengupta wrote: > Well, we got quite a few suggestions. So I went ahead and created a doodle > poll. Please find the link below for the poll, and vote for the name you > think will be the best. > > http://doodle.com/poll/h7gfdhswrbsxxiaa Thanks! I

Re: [Gluster-devel] [Gluster-users] Quota list not reflecting disk usage

2016-02-12 Thread Steve Dainard
What would happen if I: - Did not disable quotas - Did not stop the volume (a 140T volume takes at least 3-4 days to do any find operations, which is too much downtime) - Found and removed all xattrs: trusted.glusterfs.quota.242dcfd9-6aea-4cb8-beb2-c0ed91ad70d3.contri on the /brick/volumename/modules
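For reference, a minimal sketch of how such quota xattrs can be inspected and removed directly on a brick. The path and xattr name are taken from the message above; doing this bypasses the documented quota-disable procedure, so treat it as an illustration rather than a recommendation:

    # Dump all xattrs on a directory inside the brick (run on the brick, not on a client mount);
    # the quota keys start with trusted.glusterfs.quota.
    getfattr -d -m . -e hex /brick/volumename/modules
    # Remove one specific contri xattr by its full, gfid-qualified name:
    setfattr -x trusted.glusterfs.quota.242dcfd9-6aea-4cb8-beb2-c0ed91ad70d3.contri /brick/volumename/modules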

Re: [Gluster-devel] [Gluster-users] Quota list not reflecting disk usage

2016-02-12 Thread Steve Dainard
So after waiting out the process of disabling quotas, waiting for the xattrs to be cleaned up, re-enabling quotas and waiting for the xattrs to be created, then applying quotas, I'm running into the same issue. Yesterday at ~2pm one of the quotas was listed as: /modules|100.0GB|18.3GB|81.7GB I
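The figures quoted above appear to be path | hard-limit | used | available. For reference (the volume name here is assumed), the same numbers can be re-checked with:

    gluster volume quota volumename list /modules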

Re: [Gluster-devel] libgfapi libvirt memory leak version 3.7.8

2016-02-12 Thread Piotr Rybicki
On 2016-02-11 at 16:02, Piotr Rybicki wrote: Hi All, I have to report that there is a memory leak in the latest version of gluster: gluster 3.7.8, libvirt 1.3.1. The memory leak exists when starting a domain (virsh start DOMAIN) which accesses a drive via libgfapi (although the leak is much smaller than with

Re: [Gluster-devel] [Gluster-users] GlusterFS-3.7.6-2 packages for Debian Wheezy now available

2016-02-12 Thread Ronny Adsetts
Kaleb Keithley wrote on 04/02/2016 06:40: > > If you're a Debian Wheezy user please give the new packages a try. Hi Kaleb, Apologies for the delay in getting back to you. I tried the upgrade on one node last week and it failed, but I hadn't had the time to try it again without the feeling of

Re: [Gluster-devel] [Gluster-users] GlusterFS-3.7.6-2 packages for Debian Wheezy now available

2016-02-12 Thread Ronny Adsetts
Kaushal M wrote on 10/02/2016 11:44: [...] >>> Excerpt of etc-glusterfs-glusterd.vol.log follows. Other than the errors, the thing that sticks out to me is in the management volume definition where it says "option transport-type rdma" as we're not using rdma. This may of

Re: [Gluster-devel] [Gluster-users] Gluster samba panic?

2016-02-12 Thread Diego Remolina
I actually had this problem with CentOS 7 and glusterfs 3.7.x. I downgraded to 3.6.x and the crashes stopped. See https://bugzilla.redhat.com/show_bug.cgi?id=1234877 - it may be the same issue. I am still on the old samba-vfs-glusterfs-4.1.12-23.el7_1.x86_64 and glusterfs-3.6.6-1.el7.x86_64 on

Re: [Gluster-devel] [Gluster-users] GlusterFS-3.7.6-2 packages for Debian Wheezy now available

2016-02-12 Thread Ronny Adsetts
Kaleb Keithley wrote on 10/02/2016 11:28: > Please attach the logs to https://bugzilla.redhat.com/show_bug.cgi?id=1304348 (or mail them to Kaushal and/or me). Log files have been attached to the ticket. Ronny

Re: [Gluster-devel] changelog bug

2016-02-12 Thread Peponnet, Cyril (Nokia - CA)
Btw, issuing "gluster vol set usr_global diagnostics.brick-log-level CRITICAL" crashes my bricks :/ https://gist.github.com/CyrilPeponnet/11954cbca725d4b8da7a -- Cyril Peponnet
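For reference, an assumed recovery sequence after such a failed option change (the volume name comes from the message above; whether the bricks come back cleanly depends on the actual crash):

    # Clear the offending option so restarted bricks do not pick it up again:
    gluster volume reset usr_global diagnostics.brick-log-level
    # Restart any brick processes that went down:
    gluster volume start usr_global force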

[Gluster-devel] libgfapi libvirt memory leak version 3.7.8

2016-02-12 Thread Piotr Rybicki
Hi All, I have to report that there is a memory leak in the latest version of gluster: gluster 3.7.8, libvirt 1.3.1. The memory leak exists when starting a domain (virsh start DOMAIN) which accesses a drive via libgfapi (although the leak is much smaller than with gluster 3.5.X). I believe libvirt itself uses

Re: [Gluster-devel] Downtime for Gerrit and Jenkins - 12th February

2016-02-12 Thread Kaushal M
On Thu, Feb 11, 2016 at 5:28 AM, Vijay Bellur wrote: > Hi All, > > We will be migrating our Gerrit and (possibly) Jenkins services from the > current hosted infrastructure on Friday, 12th February. To accomplish the > migration, we will have a downtime for these services

Re: [Gluster-devel] Feature: Automagic lock-revocation for features/locks xlator (v3.7.x)

2016-02-12 Thread Richard Wareing
Hey, sorry for the late reply but I missed this e-mail. With respect to identifying locking domains, we use the identical logic that GlusterFS itself uses to identify the domains, which is just a simple string comparison if I'm not mistaken. System processes (SHD/Rebalance) locking domains

Re: [Gluster-devel] Throttling xlator on the bricks

2016-02-12 Thread Richard Wareing
Hey Ravi, I'll ping Shreyas about this today. There's also a patch we'll need for multi-threaded SHD to fix the least-pri queuing. The PID of the process wasn't tagged correctly via the call frame in my original patch. The patch below fixes this (for 3.6.3); I didn't see multi-threaded self

[Gluster-devel] 3.6.8 crashing a lot in production

2016-02-12 Thread Joe Julian
I have multiple bricks crashing in production. Any help would be greatly appreciated. The crash log is in this bug report: https://bugzilla.redhat.com/show_bug.cgi?id=1307146 Looks like it's crashing in pl_inodelk_client_cleanup

Re: [Gluster-devel] 3.6.8 crashing a lot in production

2016-02-12 Thread Krutika Dhananjay
Taking a look. Give me some time. -Krutika

[Gluster-devel] Fwd: 3.6.8 crashing a lot in production

2016-02-12 Thread Joe Julian
Could this be a regression from http://review.gluster.org/7981 ?

[Gluster-devel] 3.6.8 glusterfsd processes not responding

2016-02-12 Thread Joe Julian
I've also got several glusterfsd processes that have stopped responding. A backtrace from a live core, strace, and state dump follow:
Thread 10 (LWP 31587):
#0 0x7f81d384289c in __lll_lock_wait () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x7f81d383e065 in _L_lock_858 () from
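For reference, a minimal sketch (the PID and volume name are placeholders, not values from the thread) of how diagnostics like the ones above can be gathered from a hung glusterfsd:

    pid=12345                                       # PID of the hung glusterfsd (placeholder)
    gdb -p "$pid" -batch -ex "thread apply all bt"  # live backtrace of all threads
    strace -f -p "$pid" -o /tmp/glusterfsd.strace   # syscall trace, following all threads
    gluster volume statedump VOLNAME                # statedump, written under /var/run/gluster by default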

Re: [Gluster-devel] Throttling xlator on the bricks

2016-02-12 Thread Pranith Kumar Karampuri
On 02/13/2016 12:13 AM, Richard Wareing wrote: Hey Ravi, I'll ping Shreyas about this today. There's also a patch we'll need for multi-threaded SHD to fix the least-pri queuing. The PID of the process wasn't tagged correctly via the call frame in my original patch. The patch below fixes