I believe some of the staff tested gfapi back in the 3.4 days and found
that at least that version didn't make a perceptible difference in
practice, though it benchmarked about 10% faster. We stayed with FUSE
because at that time gfapi was still newish and we had used FUSE for
quite a while and understood it.
Hmm, in that case, could you do the following:
If you can recreate this issue, could you record the extended
attributes associated with the shards that you see being listed against
the replaced brick in heal-info?
For example, assuming you replaced the brick on vng, once heal kicks in,
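A minimal sketch of how those xattrs could be dumped, assuming a brick at /tank/vmdata/datastore1 and a made-up heal-info sample (both the brick path and the shard names are hypothetical, not from the thread); in practice you would pipe in the real `gluster volume heal <volname> info` output:

```shell
# Hypothetical brick path; substitute the replaced brick's real path.
BRICK=/tank/vmdata/datastore1

# Stand-in sample for: gluster volume heal <volname> info
heal_info='Brick vng.proxmox.softlog:/tank/vmdata/datastore1
/.shard/b1c2d3e4-0000-0000-0000-000000000000.5
/.shard/b1c2d3e4-0000-0000-0000-000000000000.6
Number of entries: 2'

# For each shard listed against the brick, print the getfattr command
# that dumps all of its extended attributes in hex.
printf '%s\n' "$heal_info" | grep '^/.shard/' | while read -r shard; do
    echo "getfattr -d -m . -e hex $BRICK$shard"
done
```

Running the emitted `getfattr` lines on the brick directory itself (not the client mount) shows the trusted.* xattrs, including the afr changelog attributes that heal decisions are based on.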
On 17/05/2016 7:56 PM, Krutika Dhananjay wrote:
Would you know where the logs of individual vms are with proxmox?
No i don't I'm afraid. Would they be a function of gluster or qemu?
--
Lindsay Mathieson
___
Gluster-users mailing list
Would you know where the logs of individual vms are with proxmox?
In those, do you see any libgfapi/gluster log messages?
-Krutika
On Tue, May 17, 2016 at 8:38 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
On 17 May 2016 at 10:02, WK wrote:
> That being said, when we lose a brick, we've traditionally just live
> migrated those VMs off onto other clusters, because we didn't want to take
> the heal hit, which at best slowed down our VMs and on the pickier ones
> caused them to RO out.
>
That should be an important clue.
That being said, when we lose a brick, we've traditionally just live
migrated those VMs off onto other clusters, because we didn't want to
take the heal hit, which at best slowed down our VMs and on the pickier
ones caused them to RO out.
We have not yet
OK, this is probably an interesting data point. I was unable to
reproduce the problem when using the fuse mount.
It's late here so I might not have time to repeat it with gfapi, but I
will tomorrow.
On 16/05/2016 4:55 PM, Krutika Dhananjay wrote:
Yes, that would probably be useful in terms of at least having access to
the client logs.
-Krutika
On Mon, May 16, 2016 at 12:18 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
Hi,
Could you share the mount and glustershd logs for investigation?
-Krutika
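For reference, on a default install the logs Krutika is asking for live under /var/log/glusterfs/: the FUSE client log is named after the mount point with slashes turned into hyphens, and the self-heal daemon logs to glustershd.log. A small sketch of that name mapping, using /mnt/datastore1 as a hypothetical mount point:

```shell
# Derive the default client log name from a (hypothetical) mount point:
# glusterfs replaces '/' in the mount path with '-' for the log file name.
mountpoint=/mnt/datastore1
logname="$(echo "$mountpoint" | sed 's|^/||; s|/|-|g').log"
echo "/var/log/glusterfs/$logname"   # -> /var/log/glusterfs/mnt-datastore1.log
# The self-heal daemon log is /var/log/glusterfs/glustershd.log
```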
On Sun, May 15, 2016 at 12:22 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
On 15/05/2016 12:45 AM, Lindsay Mathieson wrote:
*First off I tried removing/adding a brick.*
gluster v remove-brick replica 2 vng.proxmox.softlog:/tank/vmdata/test1 force
That worked fine; the VMs (on another node) kept running without a hiccup.
I deleted /tank/vmdata/test1, then
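The cycle described above can be sketched as a dry run (the volume name is hypothetical, since the snippet omits it); the commands are echoed rather than executed because they need a live cluster:

```shell
# Dry-run sketch of the remove/re-add brick cycle from the message.
# VOL is hypothetical -- the original command's volume name was cut off.
VOL=test1vol
BRICK=vng.proxmox.softlog:/tank/vmdata/test1

echo "gluster volume remove-brick $VOL replica 2 $BRICK force"  # drop to replica 2
echo "rm -rf /tank/vmdata/test1"                                # wipe the old brick dir
echo "gluster volume add-brick $VOL replica 3 $BRICK"           # re-add, back to replica 3
echo "gluster volume heal $VOL full"                            # trigger a full heal
```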