On 12/14/2015 04:44 PM, Udo Giacomozzi wrote:
Hi,
it happened again:
today I've upgraded some packages on node #3. Since the kernel had a
minor update, I was asked to reboot the server, and did so.
At that time only one (non-critical) VM was running on that node. I've
checked twice and Gluster was *not* healing when I rebooted.
Am 09.12.2015 um 22:33 schrieb Lindsay Mathieson:
On 10/12/2015 3:15 AM, Udo Giacomozzi wrote:
These were the commands executed on node #2 during step 6:
gluster volume add-brick "systems" replica 3
metal1:/data/gluster/systems
gluster volume heal "systems" full # to trigger sync
Am 09.12.2015 um 14:39 schrieb Lindsay Mathieson:
Udo, it occurs to me that if your VMs were running on #2 & #3 and you
live migrated them to #1 prior to rebooting #2/3, then you would
indeed rapidly get progressive VM corruption.
However it wouldn't be due to the heal process, but rather
Am 08.12.2015 um 07:57 schrieb Krutika Dhananjay:
quick-read=off
read-ahead=off
io-cache=off
stat-prefetch=off
eager-lock=enable
remote-dio=enable
quorum-type=auto
server-quorum-type=server
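Those options are shown with the short keys used in gluster's virt group file. A dry-run sketch of applying them individually with `gluster volume set` (the fully-qualified option names are my assumption mapping those short keys; "systems" is the volume name used elsewhere in this thread):

```shell
#!/bin/sh
# Dry-run: print the "gluster volume set" commands implied by the
# options quoted above, without touching a live cluster.
VOL=systems
opts="performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.stat-prefetch=off
cluster.eager-lock=enable
network.remote-dio=enable
cluster.quorum-type=auto
cluster.server-quorum-type=server"

printf '%s\n' "$opts" | while IFS='=' read -r key val; do
    # Drop the leading "echo" to actually apply the setting.
    echo "gluster volume set $VOL $key $val"
done
```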
Perfectly put. I am one of the devs who work on the replicate module. You
On 7/12/2015 9:03 PM, Udo Giacomozzi wrote:
All VMs were running on machine #1 - the two other machines (#2 and
#3) were *idle*.
Gluster was fully operating (no healing) when I rebooted machine #2.
For other reasons I had to reboot machines #2 and #3 a few times, but
since all VMs were
Am 08.12.2015 um 02:59 schrieb Lindsay Mathieson:
Hi Udo, thanks for posting your volume info settings. Please note for
the following, I am not one of the devs, just a user, so unfortunately
I have no authoritative answers :(
I am running a very similar setup - Proxmox 4.0, three nodes, but using
ceph for our production storage. Am heavily
On 10/12/2015 3:15 AM, Udo Giacomozzi wrote:
These were the commands executed on node #2 during step 6:
gluster volume add-brick "systems" replica 3
metal1:/data/gluster/systems
gluster volume heal "systems" full # to trigger sync
Then I waited for replication to finish before
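One way to script that wait is to poll `gluster volume heal <vol> info`. A hedged sketch (the "Number of entries: N" line is the per-brick format of gluster's heal-info report; the helper and polling loop are my own illustration, not from the thread):

```shell
#!/bin/sh
# Succeeds if heal-info text on stdin shows any brick with pending entries.
heal_pending() {
    grep -q '^Number of entries: [1-9]'
}

# Against a live cluster one could poll like this (not executed here):
#   while gluster volume heal systems info | heal_pending; do sleep 30; done

# Demonstration with canned heal-info lines:
printf 'Number of entries: 2\n' | heal_pending && echo "healing"
printf 'Number of entries: 0\n' | heal_pending || echo "clean"
```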
Am 09.12.2015 um 17:17 schrieb Joe Julian:
A-1) shut down node #1 (the first that is about to be upgraded)
A-2) remove node #1 from the Proxmox cluster (pvevm delnode "metal1")
A-3) remove node #1 from the Gluster volume/cluster (gluster volume
remove-brick ... && gluster peer detach
On 8/12/2015 4:57 PM, Krutika Dhananjay wrote:
You can alternatively enable this configuration in one shot using the
following command for VM workloads:
# gluster volume set <VOLNAME> group virt
Alas, it is not packaged in the Debian packages :)
vi /var/lib/glusterd/groups/virt
And add the
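A sketch of what that group file would contain, reconstructed from the options Krutika quoted earlier in the thread (key=value, one per line; the fully-qualified key names are my assumption):

```
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.stat-prefetch=off
cluster.eager-lock=enable
network.remote-dio=enable
cluster.quorum-type=auto
cluster.server-quorum-type=server
```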
- Original Message -
> From: "Lindsay Mathieson" <lindsay.mathie...@gmail.com>
> To: gluster-users@gluster.org
> Sent: Tuesday, December 8, 2015 7:29:00 AM
> Subject: Re: [Gluster-users] Strange file corruption
> Hi Udo, thanks for posting your volume info settings.
Am 07.12.2015 um 15:01 schrieb Lindsay Mathieson:
On 7/12/2015 9:03 PM, Udo Giacomozzi wrote:
Setup details:
- Proxmox 4.0 cluster (not yet in HA mode) = Debian 8 Jessie
- redundant Gbit LAN (bonding)
- Gluster 3.5.2 (most current Proxmox package)
- two volumes, both "replicate" type, 1 x 3 = 3 bricks
On 7/12/2015 9:03 PM, Udo Giacomozzi wrote:
Setup details:
- Proxmox 4.0 cluster (not yet in HA mode) = Debian 8 Jessie
- redundant Gbit LAN (bonding)
- Gluster 3.5.2 (most current Proxmox package)
- two volumes, both "replicate" type, 1 x 3 = 3 bricks
- cluster.server-quorum-ratio: 51%
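With three nodes, a 51% server-quorum-ratio means at least 2 of 3 glusterd peers must be up, so a single node can reboot without the surviving bricks being taken down. The arithmetic, as my own illustration rather than anything from the thread:

```shell
#!/bin/sh
# Minimum peers required for quorum = ceil(ratio * node_count).
# 0.51 * 3 = 1.53, so 2 of 3 nodes must stay up.
awk 'BEGIN { r = 0.51; n = 3; m = int(r * n); if (m < r * n) m++; print m }'
```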
Hi all,
yesterday I had a strange situation where Gluster healing corrupted
*all* my VM images.
In detail:
I had about 15 VMs running (in Proxmox 4.0) totaling about 600 GB of
qcow2 images. Gluster is used as storage for those images in replicate 3
setup (i.e. 3 physical servers