Status looks good. Two master bricks are Active and participating in
syncing. Please let us know what issue you are observing.
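For reference, the session status can be checked with something like this
(replace the volume and host names with yours):

    gluster volume geo-replication mastervol slavehost::slavevol status detail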
Regards,
Aravinda
On 10/15/2015 11:40 AM, Wade Fitzpatrick wrote:
I have twice now tried to configure geo-replication of our
Stripe-Replicate volume to a remote Stripe
Oh, and yes: please make sure to double-check the .deb packages :) Last
time there was a bug because of which volumes didn't start after the
upgrade. I know these packages are made by volunteers, but I believe the
devs should give them an appropriate review.
2015-10-15 11:29 GMT+03:00 Mauro M.:
The Slave will be eventually consistent. If rsync created temp files in
the Master volume and then renamed them, that gets recorded in the
Changelogs (journal). The exact same steps will be replayed in the Slave
volume. If there are no errors, Geo-rep should unlink the temp files in
the Slave and retain the actual files.
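To illustrate (a sketch, not from this thread; the file and mount names
are made up): by default rsync writes to a dot-prefixed temp file and then
renames it into place, which is exactly the create+rename sequence the
changelog records. --inplace skips the temp file:

    # Default behaviour: writes .bigfile.XXXXXX in the destination, then renames it
    rsync -av bigfile /mnt/mastervol/
    # Writes directly to the final name, no temp file + rename
    rsync -av --inplace bigfile /mnt/mastervol/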
Let us know if the issue persists.
Well, I'm kind of worried about the 3 million failures listed in the
FAILURES column, the timestamp showing that syncing "stalled" two days
ago, and the fact that only half of the files have been transferred to
the remote volume.
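(As an aside, the per-file errors behind that FAILURES count should show
up in the geo-replication logs on the master nodes; the exact path can
vary by distribution:)

    # On each master node; the session has its own subdirectory
    less /var/log/glusterfs/geo-replication/mastervol/*.log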
Hi Atin,
First of all, I am not sure how the datacenter cluster nodes boot.
My assumption is that the cluster nodes boot from their local disk. Let's
suppose each node has some number of HDDs and one of them is the boot
disk. Gluster is running on each node to distribute the storage
(excluding the boot disk).
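Something like this, for concreteness (device, path and volume names are
made up):

    # On each node: format a non-boot HDD and mount it as a brick
    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /bricks/brick1
    mount /dev/sdb1 /bricks/brick1
    # From any one node: pool the bricks into a Gluster volume
    gluster volume create datavol replica 2 \
        node1:/bricks/brick1/data node2:/bricks/brick1/data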
Hi guys,
In the document
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Managing_Snapshots.html#Prerequisites37
the prerequisites say: "Only *linear LVM* is supported with Red Hat
Gluster Storage 3.0."
Is there any problem if I build
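(For concreteness, a linear LV for a brick would be created with
something like the following; the device and names are placeholders:)

    pvcreate /dev/sdb
    vgcreate vg_bricks /dev/sdb
    # lvcreate is linear by default; striping would need -i/--stripes
    lvcreate -L 100G -n lv_brick1 vg_bricks
    mkfs.xfs -i size=512 /dev/vg_bricks/lv_brick1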
To date my experience with upgrades has been a disaster: in two cases I
was unable to start my volume and eventually had to downgrade.
What I want to recommend is an EXTENSIVE REGRESSION TEST. The most
important goal is that NOTHING that works with the previous release
should break in the new one.
Thanks!
As near as I can tell, GlusterFS thinks it's done -- I finally ended
up renaming the files myself after waiting a couple of days.
If I take an idle master/slave pair (no pending writes) and rsync a file
to the master volume, I can see that the file is otherwise correct
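For reproducibility, the test amounts to something like this (mount
points are placeholders):

    rsync -av testfile /mnt/mastervol/
    # once the changelog has been replayed, compare master and slave copies
    md5sum /mnt/mastervol/testfile /mnt/slavevol/testfile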
I now have a situation similar to
https://bugzilla.redhat.com/show_bug.cgi?id=1202649 but when I try to
register to report the bug, I don't receive the confirmation email for my
account, so I can't register.
Stopping and starting geo-replication has no effect, and in fact it now
shows no status at all.
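(That is, the standard stop/start cycle, with my volume and host names
substituted:)

    gluster volume geo-replication mastervol slavehost::slavevol stop
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status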
On 15 October 2015 at 17:26, Udo Giacomozzi wrote:
> My problem is that every time I reboot one of the nodes, Gluster starts
> healing all of the files. Since they are quite big, it takes up to ~15-30
> minutes to complete. It completes successfully, but I have to be
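(One suggestion, with the volume name as a placeholder: you can watch
what is still pending heal after a reboot with

    gluster volume heal datavol info

and only put load back on the node once the list is empty.)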
On 15 October 2015 at 11:15, Pranith Kumar Karampuri wrote:
> Okay, so re-installation is going to change the root partition, but the brick
> data is going to remain intact, am I correct? Are you going to stop the
> volume, re-install all the machines in the cluster, and bring them back up?
Probably a good question for gluster-users (CCed).
Pranith
On 10/14/2015 03:57 AM, Brian Lahoue wrote:
Has anyone tested backing up a fairly large Gluster implementation
with Amanda/Zmanda recently?
+gluster-users
On 10/16/2015 03:27 AM, Nir Soffer wrote:
This is a good question for the gluster mailing list.
On Oct 15, 2015 4:07 PM, "Nathanaël Blanchet" wrote:
Hello,
I noticed after several different installations that the