Olivier,

Thanks for your reply.  Can you explain what you mean by:

> Instead of configuring your 8 disks in RAID 0, I would use JBOD and
> let Gluster do the concatenation. That way, when you replace a disk,
> you just have 125 GB to self-heal.

Thanks,
Don





----- Original Message -----
From: "Olivier Nicole" <[email protected]>
To: [email protected]
Cc: [email protected]
Sent: Tuesday, October 4, 2011 10:37:16 PM
Subject: Re: [Gluster-users] Gluster on EC2 - how to replace failed EBS volume?

Hi Don,

> 1. Remove the brick from the Gluster volume, stop the array, detach the 8 
> vols, make new vols from last good snapshot, attach new vols, restart array, 
> re-add brick to volume, perform self-heal.
> 
> or
> 
> 2. Remove the brick from the Gluster volume, stop the array, detach the 8 
> vols, make brand new empty volumes, attach new vols, restart array, re-add 
> brick to volume, perform self-heal.  Seems like this one would take forever 
> and kill performance.

I am very new to Gluster, but I would think that solution 2 is the
safest: you don't mix up the rebuild from two different sources; only
Gluster is involved in rebuilding.
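
For what it's worth, at the EC2 level that swap might look roughly
like this (the volume/instance IDs, device name and zone below are
just placeholders, and this assumes the classic ec2-api-tools with an
mdadm array underneath the brick):

    # detach one of the failed EBS volumes
    ec2-detach-volume vol-11111111

    # create a brand new, empty 125 GB volume in the same zone
    ec2-create-volume -s 125 -z us-east-1a

    # attach the new volume under the old device name
    ec2-attach-volume vol-22222222 -i i-33333333 -d /dev/sdf

    # then recreate the md array, remount the brick path, re-add the
    # brick and let Gluster self-heal repopulate the data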

That said, I have read that you can trigger self-heal with a time
parameter to limit the find to the files that were modified since
your brick went offline. So I believe that could be extended to the
time since your snapshot.
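
If it helps, the way I have seen that described is a find/stat walk
over the client mount, something like this (/mnt/glusterfs is just a
placeholder mount point; adjust -mtime to the time since the brick
went offline or since your snapshot):

    # only touch files modified in the last day to trigger self-heal
    find /mnt/glusterfs -mtime -1 -noleaf -print0 \
        | xargs --null stat >/dev/null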

Instead of configuring your 8 disks in RAID 0, I would use JBOD and
let Gluster do the concatenation. That way, when you replace a disk,
you just have 125 GB to self-heal.
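
Roughly, instead of building one brick on top of an md RAID 0 device,
each EBS volume gets its own filesystem and becomes its own brick
(paths, hostname and the volume name below are just placeholders):

    # one filesystem per EBS volume, repeated for each of the 8 disks
    mkfs.xfs /dev/sdf
    mount /dev/sdf /export/brick1

    # a plain distribute volume over the 8 bricks replaces RAID 0
    gluster volume create myvol transport tcp \
        server1:/export/brick1 server1:/export/brick2 \
        server1:/export/brick3 server1:/export/brick4 \
        server1:/export/brick5 server1:/export/brick6 \
        server1:/export/brick7 server1:/export/brick8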

Best regards,

Olivier
