The heal info command shows perfect consistency between nodes; that's what
confused me. At the moment, the physical partitions (LVM partitions) that
gluster is using are different sizes, but I expected to see the "least
common denominator" for the total size, and I expected to see it consistent
across nodes.
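For anyone who wants to compare what each brick actually reports, per-brick
capacity and pending heals can be checked with something like the following
(assuming a volume named "data"; substitute your own volume name):

    gluster volume heal data info
    gluster volume status data detail

The status detail output should list Total Disk Space and Disk Space Free
for each brick, which is where any size mismatch between bricks will show up.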
On Sun, Aug 6, 2017 at 4:42 AM, Jim Kusznir wrote:
Well, after a very stressful weekend, I think I have things largely
working. Turns out that most of the above issues were caused by the Linux
permissions of the exports for all three volumes (they had been reset to
600; setting them to 774 or 770 fixed many of the issues).
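For the record, the fix amounted to roughly the following on each export
directory (the brick path here is only an example, not necessarily your
layout; note that oVirt also expects storage domain contents to be owned by
vdsm:kvm, i.e. UID/GID 36:36):

    chown -R 36:36 /gluster/brick1/engine
    chmod 770 /gluster/brick1/engine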
Hi all:
Today has been rough. Two of my three nodes went down today, and self-heal
has not been healing well. Four hours later, VMs are running, but the
engine is not happy. It claims the storage domain is down (even though it
is up on all hosts and VMs are running). I'm getting a ton of these
messages.
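If anyone else hits this, the standard checks for a lagging self-heal are
along these lines (the volume name "engine" is just an example; repeat for
each volume):

    gluster volume heal engine info
    gluster volume heal engine info split-brain
    gluster volume heal engine full

The first two report pending and split-brain entries; the last forces a
full heal crawl across the volume.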