> From: ??
> I have 3 peers, e.g. P1, P2 and P3, and each of them has 2 bricks:
> P1 has 2 bricks, b1 and b2.
> P2 has 2 bricks, b3 and b4.
> P3 has 2 bricks, b5 and b6.
>
> Based on the above, I create a volume (afr volume) like this:
>
> b1 and b3
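A hedged sketch of what such a volume-create command might look like, pairing bricks across peers so that no replica pair lives on a single server. The hostnames (p1/p2/p3) and brick paths below are assumptions for illustration, not taken from the original mail:

```shell
# Assumed hostnames and brick paths; adjust to your actual layout.
# Each consecutive pair of bricks forms one replica set.
gluster volume create myvol replica 2 \
    p1:/bricks/b1 p2:/bricks/b3 \
    p1:/bricks/b2 p3:/bricks/b5 \
    p2:/bricks/b4 p3:/bricks/b6
gluster volume start myvol
```

This gives a distributed-replicate (2 x 3) volume where losing any one peer leaves every replica pair with a surviving copy.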
> Please refer to the BitRot feature page:
> http://www.gluster.org/community/documentation/index.php/Features/BitRot
> I suppose it is already quite mature, since it is listed as a feature in the
> Red Hat Gluster Storage 3.2 Administration Guide.
If you are only looking for *detection*, it
> Is there a way I can change this existing volume to be replica 3 arbiter 1,
> or do I need to create new volumes and rsync the data?
No, you currently can't add an arbiter brick to an existing volume in 3.7.
See:
http://www.gluster.org/pipermail/gluster-users/2016-March/025664.html
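Since in-place conversion isn't supported in 3.7, one workaround sketch is to build a fresh arbiter volume and copy the data across. The volume names, hostnames and brick paths below are hypothetical:

```shell
# New replica 3 arbiter 1 volume; the third brick holds metadata only,
# so it needs far less space than the data bricks.
gluster volume create newvol replica 3 arbiter 1 \
    p1:/bricks/nb1 p2:/bricks/nb2 p3:/bricks/arb1
gluster volume start newvol

# Copy data between FUSE mounts of the old and new volumes,
# never directly between raw brick directories.
rsync -avP /mnt/oldvol/ /mnt/newvol/
```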
According to
How does tiering interact with sharding?
If a volume is both sharded and tiered, and a large file is split into shards,
will the entire logical file be moved between hot and cold tiers? Or will only
individual shards be migrated?
I didn't see this covered in the documentation. Thanks
Anyone have any success in updating to 3.7.9 on Debian Jessie?
I'm seeing dependency problems when trying to install 3.7.9 using the Debian
Jessie packages on download.gluster.org.
For example, it says it wants liburcu4.
Depends: liburcu4 (>= 0.8.4) but it is not installable
I can
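Two generic apt diagnostics that may help narrow this down (a sketch; `liburcu4` comes from the error above, and no Jessie-specific fix is implied):

```shell
# Check which configured repos, if any, can provide the missing library.
apt-cache policy liburcu4

# Simulate the install to see the full dependency chain that fails,
# without changing anything on the system.
apt-get install -s glusterfs-server
```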
> On Mar 3, 2016, at 7:12 AM, p...@email.cz wrote:
>
> will extend replica 2 to replica 3 (arbiter) ASAP.
Anyone know how to do that? The add-brick command in 3.7.8 doesn’t let me
specify “arbiter 1” after “replica 3”. I thought I read that the ability to
change to an arbiter volume
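For what it's worth, later releases did add this conversion via add-brick; the command form there is roughly as follows, though the exact version availability and the brick path shown are assumptions to verify against your release notes:

```shell
# Converts an existing replica 2 volume by adding one arbiter brick
# per replica set (accepted in newer releases, rejected in 3.7.8).
gluster volume add-brick myvol replica 3 arbiter 1 p3:/bricks/arb1
```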
>> 1. Is the command "gluster v replace-brick" async or sync? Is the replace
>> complete when the command quits?
> It is a sync command; replacing the brick finishes as the command returns.
Hmm, that has not been my experience with 3.7.6 and 3.7.8. Perhaps there is a
question of
> The command "heal full" is async, and "heal info" shows nothing needs to heal.
> How can I know when the "heal full" has completed replicating these
> files (a, b and c)? How can I monitor the progress?
I'm not a gluster expert; I'm pretty new to this. Yes, I've had the same
problem and it is
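For monitoring, the stock heal sub-commands below (volume name assumed) report pending entries per brick. None of them reports a percent-complete for "heal full", so watching the pending count trend toward zero is the usual proxy:

```shell
gluster volume heal myvol info                    # entries still pending heal
gluster volume heal myvol statistics heal-count   # pending count per brick
gluster volume heal myvol info split-brain        # entries needing manual action
```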
> The "unable to get index-dir on .." messages you saw in log are not
> harmful in this scenario.
> A simple explanation: when you have 1 new node and 2 old nodes, the
> self-heal-daemon and heal commands run on the new node expect that the
> index-dir "/.glusterfs/xattrop/dirty"
> 1. Is the command "gluster v replace-brick" async or sync? Is the replace
> complete when the command quits?
Async. The command will end immediately, and the replace will continue in the
background.
Use "gluster volume replace-brick VOLUME-NAME OLD-BRICK NEW-BRICK status" to
monitor
>> I’m still trying to figure out why the self-heal-daemon doesn’t seem to be
>> working, and what “unable to get index-dir” means. Any advice on what to
>> look at would be appreciated. Thanks!
> At any point, did you have one node on 3.7.6 and another on 3.7.8?
Yes. I upgraded each
On Feb 29, 2016, at 5:43 PM, Alan Millar <grunthos...@yahoo.com> wrote:
> I have tried with entry-self-heal/metadata-self-heal/data-self-heal set both
> on and off; neither seems to make a difference.
Correction: setting these to ON does fix the actual replicated data. I checked
with md5sum on various files on both bricks, and it matches.
But it does not fix the
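The per-file check described above can be wrapped in a small helper; the brick paths in the example call are placeholders:

```shell
# Compare the md5 of one relative path across two brick directories.
# Prints MATCH or DIFFER; returns 2 if either file is unreadable.
compare_brick_file() {
    local b1="$1" b2="$2" rel="$3"
    local h1 h2
    h1=$(md5sum < "$b1/$rel") || return 2
    h2=$(md5sum < "$b2/$rel") || return 2
    if [ "$h1" = "$h2" ]; then
        echo "MATCH $rel"
    else
        echo "DIFFER $rel"
    fi
}

# Example with placeholder brick paths:
# compare_brick_file /bricks/b1 /bricks/b3 some/dir/file.txt
```

Running this from a host that can see both brick filesystems (or over ssh) avoids checking each file by hand.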