So,
I am running 4.1.x and I started to use tiering.
I ran into a load of problems where my email server would get kernel
panics, starting 12 hours after the change.
I am in the process of detaching the tier.
I saw that in version 6, the tier feature was completely removed.
I am under the
temporarily.
Hope that clarifies what's going on for you,
-Darrell
On Aug 23, 2019, at 5:06 PM, Carl Sirotic
<csiro...@evoqarchitecture.com> wrote:
Okay,
so that means at least I am not getting the expected behavior, and
there is hope.
I put the quorum settings that I
-- Forwarded message --
From: Carl Sirotic
Date: Aug. 23, 2019 7:00 p.m.
Subject: Re: [Gluster-users] Brick Reboot => VMs slowdown, client crashes
To: Joe Julian
Cc:
settings?
Ingo
On Aug 23, 2019 at 15:53, Carl Sirotic
<csiro...@evoqarchitecture.com> wrote:
However,
I must have misunderstood the whole concept of gluster.
In a replica 3, for me, it's completely unacceptable, regardless of the
options, that all my VMs go down when I reboot one node.
The whole purpose of keeping a full, live three-way copy of my data is
supposed to be exactly this.
I am in
apply the gluster virt group to your volumes, or at least
features.shard = on on your VM volume?
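For anyone following the thread, applying the virt option group or enabling sharding is done with `gluster volume set`. This is only a sketch: `vmstore` is a placeholder volume name, and the available options should be checked against your Gluster version before use.

```shell
# Apply the whole "virt" option group (tuned defaults for VM workloads).
# "vmstore" is a placeholder volume name.
gluster volume set vmstore group virt

# Or, at minimum, enable sharding on the VM volume:
gluster volume set vmstore features.shard on

# Confirm the option took effect:
gluster volume get vmstore features.shard
```

Note that enabling sharding on a volume only affects files created afterwards; existing VM images are not retroactively sharded.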
On Aug 19, 2019, at 11:05 AM, Carl Sirotic
wrote:
Yes, I made sure there was no heal.
I suspect that shutting down a host isn't the right way to
go.
Hi Carl, Did you check for any
the clients will wait for a timeout before restoring full functionality.
You can stop glusterd, and in fact all Gluster processes, by using a script in /usr/share/gluster/scripts (the path is from memory and could be wrong).
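As an illustration of both points (the script path and the 42-second default are from common installs and should be verified locally; `vmstore` is a placeholder volume name):

```shell
# Check the client-side timeout that stalls I/O after a brick vanishes.
# The default is typically 42 seconds; lower values fail over faster
# but risk spurious disconnects on flaky networks.
gluster volume get vmstore network.ping-timeout

# Before planned maintenance, stop all Gluster processes cleanly so
# clients see an orderly shutdown instead of waiting out the timeout.
# The path may vary by distribution; check your package's file list.
/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
```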
Best Regards,
Strahil Nikolov

On Aug 19, 2019 18:34, Carl Sirotic wrote:
Hi,
we have a replica 3 cluster.
Two other servers are clients that run VMs stored on the Gluster
volumes.
I had to reboot one of the bricks for maintenance.
The whole VM setup went super slow and some of the clients crashed.
I think there is some timeout setting for KVM/Qemu vs
that remained up
during maintenance.
-John
On Wed, Jul 3, 2019 at 3:48 PM Carl Sirotic
<csiro...@evoqarchitecture.com> wrote:
I have a replica 3 cluster, 3 nodes with bricks and 2 "client" nodes,
that run the VMs through a mount of the data on the bricks.
Now, one of the bricks need maintenance and I will need to shut it down
for about 15 minutes.
I didn't find any information on what I am supposed to do.
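One commonly suggested pre-maintenance check (a sketch only; `vmstore` is a placeholder volume name, and the exact procedure should be confirmed for your version) is to make sure no heals are pending before taking the brick down, and again before touching any other node:

```shell
# Verify the volume is fully healed before taking a brick offline.
gluster volume heal vmstore info

# Newer releases also offer a condensed summary view:
gluster volume heal vmstore info summary

# After the node comes back, wait for pending heal counts to drain
# to zero before starting maintenance on the next node.
watch -n 10 'gluster volume heal vmstore info summary'
```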
If I get
Thank you for those answers.
I will take time to ponder if glusterfs is the solution I was looking for
in this case.
Thank you.
On Tue, Dec 18, 2018 at 10:36 PM csirotic wrote:
> Hi,
> I am new to using gluster and I am running some tests right now. I am
> fairly inexperienced as well, so