On 18 November 2014 17:46, Franco Broi franco.b...@iongeo.com wrote:
Try strace -Ff -e file -p 'glusterfsd pid'
Thanks, Attached
Process 27115 attached with 25 threads - interrupt to quit
[pid 27122] stat("/mnt/gluster-brick1/datastore", {st_mode=S_IFDIR|0755,
st_size=4, ...}) = 0
[pid 11840]
Can't see how any of that could account for 1000% cpu unless it's just
stuck in a loop.
On Tue, 2014-11-18 at 18:00 +1000, Lindsay Mathieson wrote:
On 18 November 2014 17:46, Franco Broi franco.b...@iongeo.com wrote:
Try strace -Ff -e file -p 'glusterfsd pid'
Thanks, Attached
On 18 November 2014 18:05, Franco Broi franco.b...@iongeo.com wrote:
Can't see how any of that could account for 1000% cpu unless it's just
stuck in a loop.
Currently still varying between 400% and 950%.
Can glusterfsd be killed without affecting the libgfapi clients? (KVMs)
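As far as I know, with a replicated volume the libgfapi clients connect to
every brick directly, so stopping one brick's glusterfsd should leave the VMs
running off the remaining replica (with the files then needing a heal when it
comes back). A rough sketch of identifying the right brick process first,
assuming the volume name datastore1:

    # shows each brick along with the PID of its glusterfsd
    gluster volume status datastore1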
On 11/18/2014 01:17 PM, Lindsay Mathieson wrote:
On 18 November 2014 17:40, Pranith Kumar Karampuri pkara...@redhat.com wrote:
Sorry, didn't see this one. I think this is happening because of the 'diff' based
self-heal, which does full-file checksums; I believe that is the root cause.
Could you
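Presumably the suggestion is to switch the data self-heal algorithm from
'diff' (which checksums both copies) to 'full' (which simply copies the file),
which is what gets tried later in the thread. A sketch, assuming the volume
name datastore1:

    # use full-file copy instead of checksum-based diff for data self-heal
    gluster volume set datastore1 cluster.data-self-heal-algorithm full

    # revert to the default behaviour later if needed
    gluster volume reset datastore1 cluster.data-self-heal-algorithm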
On Tue, 18 Nov 2014 02:36:19 PM Pranith Kumar Karampuri wrote:
On 11/18/2014 01:17 PM, Lindsay Mathieson wrote:
On 18 November 2014 17:40, Pranith Kumar Karampuri pkara...@redhat.com
wrote:
However given the files are tens of GB in size, won't it thrash my
network?
Yes, you are right.
When I run the subject I get:
root@vnb:~# gluster volume heal datastore1 info
Brick vnb:/mnt/gluster-brick1/datastore/
/images/100/vm-100-disk-1.qcow2 - Possibly undergoing heal
gfid:8759bea0-ab64-4f7b-87b3-69217ebfee55
gfid:427efbbf-408e-4de8-b97d-16a2ba756a52
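In case it helps, one way to map those gfid entries back to paths is via the
.glusterfs directory on the brick, where every regular file has a hard link
named after its gfid (sketch only, using the first gfid from the output above):

    # locate the gfid hard link on the brick
    ls -li /mnt/gluster-brick1/datastore/.glusterfs/87/59/8759bea0-ab64-4f7b-87b3-69217ebfee55

    # then find the normal path sharing the same inode
    find /mnt/gluster-brick1/datastore -samefile \
        /mnt/gluster-brick1/datastore/.glusterfs/87/59/8759bea0-ab64-4f7b-87b3-69217ebfee55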
On 11/18/2014 04:14 PM, Lindsay Mathieson wrote:
On Tue, 18 Nov 2014 02:36:19 PM Pranith Kumar Karampuri wrote:
On 11/18/2014 01:17 PM, Lindsay Mathieson wrote:
On 18 November 2014 17:40, Pranith Kumar Karampuri pkara...@redhat.com
wrote:
However given the files are tens of GB in size,
- Original Message -
From: Lindsay Mathieson lindsay.mathie...@gmail.com
To: gluster-users gluster-users@gluster.org
Sent: Tuesday, November 18, 2014 4:24:51 PM
Subject: [Gluster-users] gluster volume heal datastore info question
When I run the subject I get:
On Tue, 18 Nov 2014 06:16:53 AM you wrote:
The heal info command that you executed basically gives a list of files to be
healed. So in the above output, 1 entry is possibly being healed and the
other 7 need to be healed.
And what is a gfid?
In glusterfs, gfid (glusterfs ID) is similar to
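(The gfid of a file can also be read straight off the brick as an extended
attribute; a quick sketch, using the brick path from earlier in the thread,
run against the brick rather than the fuse mount:)

    # prints the trusted.gfid xattr in hex
    getfattr -n trusted.gfid -e hex \
        /mnt/gluster-brick1/datastore/images/100/vm-100-disk-1.qcow2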
I have a VM image which is a sparse file - 512GB allocated, but only 32GB used.
root@vnb:~# ls -lh /mnt/gluster-brick1/datastore/images/100
total 31G
-rw--- 2 root root 513G Nov 18 19:57 vm-100-disk-1.qcow2
I switched to full sync and rebooted.
A heal was started on the image and it seemed
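(To compare the apparent size against the blocks actually allocated on the
brick, something like this should show both; paths as above:)

    # -s adds the allocated size as a first column, next to the 513G apparent size
    ls -lhs /mnt/gluster-brick1/datastore/images/100

    # or apparent vs on-disk usage side by side
    du -h --apparent-size /mnt/gluster-brick1/datastore/images/100/vm-100-disk-1.qcow2
    du -h /mnt/gluster-brick1/datastore/images/100/vm-100-disk-1.qcow2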
On Mon, Nov 17, 2014 at 08:57:01PM +0100, Niels de Vos wrote:
Hi all,
Tomorrow (Tuesday) we will have another Gluster Community Bug Triage
meeting.
Meeting details:
- location: #gluster-meeting on Freenode IRC
- date: every Tuesday
- time: 12:00 UTC, 13:00 CET (in your terminal, run:
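(To convert the meeting time to your local timezone, something along these
lines works with GNU date; this is just an illustration, the exact command in
the announcement may have differed:)

    date -d '12:00 UTC'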
On 11/18/2014 05:35 PM, Lindsay Mathieson wrote:
I have a VM image which is a sparse file - 512GB allocated, but only
32GB used.
root@vnb:~# ls -lh /mnt/gluster-brick1/datastore/images/100
total 31G
-rw--- 2 root root 513G Nov 18 19:57 vm-100-disk-1.qcow2
I switched to full sync and
On 11/18/2014 06:56 AM, Pranith Kumar Karampuri wrote:
On 11/18/2014 05:35 PM, Lindsay Mathieson wrote:
I have a VM image which is a sparse file - 512GB allocated, but only
32GB used.
root@vnb:~# ls -lh /mnt/gluster-brick1/datastore/images/100
total 31G
-rw--- 2 root root 513G Nov 18
The Gluster community is pleased to announce updated releases for the
3.4 and 3.5 families. With the release of 3.6 a few weeks ago, this brings
all the current members of GlusterFS into a more stable, production-ready
status.
The GlusterFS 3.4.6 release is focused on bug
Hello, I was wondering if there has been any progress on reproducing this error
or if there is any more info I can provide.
Thanks, Jon
Did you have the same problem? Is it a memory leak?
2014-11-18 16:28 GMT-03:00 Tamas Papp tom...@martos.bme.hu:
Switching to 3.5 helped us a _lot_.
--
Sent from mobile
On November 18, 2014 7:48:45 PM Juan José Pavlik Salles
jjpav...@gmail.com wrote:
Hi guys, I've a small cluster with
On 11/18/2014 09:41 PM, Juan José Pavlik Salles wrote:
Did you have the same problem? Is it a memory leak?
Yes.
After the upgrade it works quite well.
tamas
Just some basic questions on the heal process; please just point me to the docs
if they are there :)
- How is the need for a heal detected? I presume nodes can detect when they
can't sync writes to the other nodes. This is flagged (xattr?) for healing
when the other nodes are back up?
- How is
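(As I understand it, yes: each brick keeps per-replica changelog xattrs
(trusted.afr.*) on the file, and non-zero counters there mark operations still
pending for the other brick, which the self-heal daemon later acts on. They can
be inspected on the brick, e.g. on the qcow2 file from this thread:)

    # dump the trusted.* xattrs; the trusted.afr.<volume>-client-N entries are
    # the pending-operation counters used to decide what needs healing
    getfattr -d -m . -e hex /mnt/gluster-brick1/datastore/images/100/vm-100-disk-1.qcow2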
Seems like a big jump to take, updating from 3.3.2 to 3.5. Is it a
plug-and-play upgrade?
2014-11-18 17:51 GMT-03:00 Tamas Papp tom...@martos.bme.hu:
On 11/18/2014 09:41 PM, Juan José Pavlik Salles wrote:
Did you have the same problem? Is it a memory leak?
Yes.
After the upgrade it works
On 11/19/2014 03:23 AM, Lindsay Mathieson wrote:
Just some basic questions on the heal process; please just point me to the docs
if they are there :)
- How is the need for a heal detected? I presume nodes can detect when they
can't sync writes to the other nodes. This is flagged (xattr?)
On 11/19/2014 02:27 AM, Juan José Pavlik Salles wrote:
Seems like a big jump to take, updating from 3.3.2 to 3.5. Is it a
plug-and-play upgrade?
Yes, it was easy.
t