My vote for Consensus Based Replication (CBR)
Also because HONDA CBR ;)
http://www.honda2wheelersindia.com/cbr1000rr/
~Joe
- Original Message -
From: "Avra Sengupta"
To: gluster-devel@gluster.org
Sent: Friday, February 12, 2016 2:05:41 PM
Subject: Re:
Well, we got quite a few suggestions. So I went ahead and created a
doodle poll. Please find the link below for the poll, and vote for the
name you think will be the best.
http://doodle.com/poll/h7gfdhswrbsxxiaa
Regards,
Avra
On 01/21/2016 12:21 PM, Avra Sengupta wrote:
On 01/21/2016 12:20
Hi
On 2016-02-12 at 07:04, Soumya Koduri wrote:
Hi Piotr,
Could you apply below gfAPI patch and check the valgrind output -
http://review.gluster.org/13125
I tried both patches, on the client and on my 2 bricks, and even recompiled
qemu. No change - it still leaks (although a few bytes less).
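For anyone reproducing this, one way to capture the valgrind output is to run
a small gfapi consumer under valgrind; the sketch below assumes qemu-img was
built with gluster support, and the server/volume/image URL and log path are
placeholders:

    # full leak check of a single gfapi open/read/close cycle
    valgrind --leak-check=full --log-file=/tmp/gfapi-valgrind.log \
        qemu-img info gluster://server/volume/disk.qcow2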
On Fri, Feb 12, 2016 at 02:05:41PM +0530, Avra Sengupta wrote:
> Well, we got quite a few suggestions. So I went ahead and created a doodle
> poll. Please find the link below for the poll, and vote for the name you
> think will be the best.
>
> http://doodle.com/poll/h7gfdhswrbsxxiaa
Thanks!
I
What would happen if I:
- Did not disable quotas
- Did not stop the volume (a 140T volume takes at least 3-4 days to do
any find operations, which is too much downtime)
- Found and removed all xattrs like
trusted.glusterfs.quota.242dcfd9-6aea-4cb8-beb2-c0ed91ad70d3.contri on
/brick/volumename/modules (commands sketched below)
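For reference, a sketch of those xattr operations (run as root directly on the
brick; the brick path and gfid are taken from the report above, and
getfattr/setfattr come from the attr package):

    # list the quota xattrs present on the brick directory
    getfattr -d -m 'trusted.glusterfs.quota' -e hex /brick/volumename/modules
    # remove a single contri xattr
    setfattr -x trusted.glusterfs.quota.242dcfd9-6aea-4cb8-beb2-c0ed91ad70d3.contri \
        /brick/volumename/modules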
So after waiting out the full process (disabling quotas, waiting for the
xattrs to be cleaned up, re-enabling quotas, waiting for the xattrs to be
created, then applying quotas), I'm running into the same issue.
Yesterday at ~2pm one of the quotas was listed as:
/modules|100.0GB|18.3GB|81.7GB
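That pipe-separated line appears to be path | hard limit | used | available
(18.3GB + 81.7GB adds up to the 100GB limit), i.e. the fields printed by
something like:

    # show the configured limit and current usage for one directory
    gluster volume quota <volname> list /modules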
I
On 2016-02-11 at 16:02, Piotr Rybicki wrote:
Hi All
I have to report that there is a mem leak in the latest version of gluster:
gluster: 3.7.8
libvirt 1.3.1
The mem leak exists when starting a domain (virsh start DOMAIN) which accesses
its drive via libgfapi (although the leak is much smaller than with
Kaleb Keithley wrote on 04/02/2016 06:40:
>
> If you're a Debian Wheezy user please give the new packages a try.
Hi Kaleb,
Apologies for the delay in getting back to you. I tried the upgrade on one node
last week and it failed but I hadn't had the time to try it again without the
feeling of
Kaushal M wrote on 10/02/2016 11:44:
[...]
>>>
>>> Excerpt of etc-glusterfs-glusterd.vol.log follows. Other than the errors,
>>> the thing that sticks out to me is in the management volume definition
>>> where it says "option transport-type rdma" as we're not using rdma. This
>>> may of
I actually had this problem with CentOS 7 and glusterfs 3.7.x
I downgraded to 3.6.x and the crashes stopped.
See https://bugzilla.redhat.com/show_bug.cgi?id=1234877
It may be the same issue.
I am still on the old samba-vfs-glusterfs-4.1.12-23.el7_1.x86_64 and
glusterfs-3.6.6-1.el7.x86_64 on
Kaleb Keithley wrote on 10/02/2016 11:28:
>
> Please attach the logs to https://bugzilla.redhat.com/show_bug.cgi?id=1304348
> (or mail them to Kaushal and/or me.)
Log files have been attached to the ticket.
Ronny
--
Ronny Adsetts
Technical Director
Amazing Internet Ltd, London
t: +44 20 8977
Btw, issuing:
gluster vol set usr_global diagnostics.brick-log-level CRITICAL
crashes my bricks :/
https://gist.github.com/CyrilPeponnet/11954cbca725d4b8da7a
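If anyone else hits this, a hedged recovery sketch (volume name as in the
report above) is to revert the option to its default and restart the dead
brick processes:

    # undo the log-level change
    gluster volume reset usr_global diagnostics.brick-log-level
    # "start force" respawns brick processes that have died
    gluster volume start usr_global force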
--
Cyril Peponnet
On Feb 9, 2016, at 8:56 AM, Vijay Bellur wrote:
On 02/08/2016 01:14 AM, Kotresh Hiremath
Hi All
I have to report that there is a mem leak in the latest version of gluster:
gluster: 3.7.8
libvirt 1.3.1
The mem leak exists when starting a domain (virsh start DOMAIN) which accesses
its drive via libgfapi (although the leak is much smaller than with gluster 3.5.X).
I believe libvirt itself uses
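A hedged repro sketch (DOMAIN is a placeholder): cycle the domain a few times
and watch the resident size of libvirtd (or of the brick processes) for
monotonic growth:

    for i in $(seq 1 20); do
        virsh start DOMAIN && virsh destroy DOMAIN
        # RSS should stay roughly flat if nothing is leaking
        grep VmRSS /proc/$(pidof libvirtd)/status
    done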
On Thu, Feb 11, 2016 at 5:28 AM, Vijay Bellur wrote:
> Hi All,
>
> We will be migrating our Gerrit and (possibly) Jenkins services from the
> current hosted infrastructure on Friday, 12th February. To accomplish the
> migration, we will have a downtime for these services
Hey,
Sorry for the late reply but I missed this e-mail. With respect to identifying
locking domains, we use the same logic that GlusterFS itself uses to
identify the domains, which is just a simple string comparison if I'm not
mistaken. System processes (SHD/Rebalance) locking domains
Hey Ravi,
I'll ping Shreyas about this today. There's also a patch we'll need for
multi-threaded SHD to fix the least-pri queuing. The PID of the process wasn't
tagged correctly via the call frame in my original patch. The patch below
fixes this (for 3.6.3); I didn't see multi-threaded self
I have multiple bricks crashing in production. Any help would be greatly
appreciated.
The crash log is in this bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=1307146
Looks like it's crashing in pl_inodelk_client_cleanup
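To confirm the crashing frame locally, something along these lines should
reproduce the backtrace from the bug report (the binary and core paths are
placeholders):

    # print the backtrace of every thread from the core file
    gdb /usr/sbin/glusterfsd /var/core/core.NNNN -batch \
        -ex 'thread apply all bt'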
Taking a look. Give me some time.
-Krutika
- Original Message -
> From: "Joe Julian"
> To: "Krutika Dhananjay" , gluster-devel@gluster.org
> Sent: Saturday, February 13, 2016 6:02:13 AM
> Subject: Fwd: [Gluster-devel] 3.6.8 crashing a lot
Could this be a regression from http://review.gluster.org/7981 ?
Forwarded Message
Subject:[Gluster-devel] 3.6.8 crashing a lot in production
Date: Fri, 12 Feb 2016 16:20:59 -0800
From: Joe Julian
To: gluster-us...@gluster.org,
I've also got several glusterfsd processes that have stopped responding.
A backtrace from a live core, strace, and state dump follow:
Thread 10 (LWP 31587):
#0 0x7f81d384289c in __lll_lock_wait () from
/lib/x86_64-linux-gnu/libpthread.so.0
#1 0x7f81d383e065 in _L_lock_858 () from
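For reference, those three artifacts can be gathered roughly like this
(pidof -s picks one glusterfsd process; gluster statedumps land under
/var/run/gluster by default, though the path is configurable):

    # backtrace of all threads in the live process
    gdb -p $(pidof -s glusterfsd) -batch -ex 'thread apply all bt'
    # syscall trace, to see where it is stuck
    strace -f -p $(pidof -s glusterfsd) -o /tmp/glusterfsd.strace
    # dump internal state (locks, fds, mem pools) of the brick processes
    gluster volume statedump <volname>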
On 02/13/2016 12:13 AM, Richard Wareing wrote:
Hey Ravi,
I'll ping Shreyas about this today. There's also a patch we'll need for
multi-threaded SHD to fix the least-pri queuing. The PID of the process wasn't
tagged correctly via the call frame in my original patch. The patch below
fixes