Step 10 isn't really necessary. The changes should probably be monitored
under the brick directory.
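For reference, one way to monitor changes under a brick directory is
inotifywait from inotify-tools; this is only a minimal sketch, and the
brick path is an example taken from later in the thread:

  # Recursively watch the brick and print every create/modify/delete event
  # (assumes inotify-tools is installed)
  inotifywait -m -r -e create,modify,delete /gluster/brick1/gluster_volume_0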
On Thu, Oct 6, 2016 at 10:25 PM, Sergei Gerasenko wrote:
I've simulated the problem on 4 VMs in a distributed replicated setup with
a replica factor of 2. In each test I repeatedly tore down and brought up a
VM from a snapshot.
What has worked so far is this:
1. Make a copy of /var/lib/glusterd from the affected machine and save it
elsewhere (a sketch follows below).
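A minimal sketch of step 1, assuming root access; the destination path is
just an example:

  # -a preserves ownership, permissions, and timestamps; copy the whole
  # glusterd state directory somewhere safe before changing anything
  cp -a /var/lib/glusterd /root/glusterd.backup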
Here is the profile for about 30 seconds; I didn't let it run the full 60:
Brick: media2-be:/gluster/brick1/gluster_volume_0
-------------------------------------------------
Cumulative Stats:
   Block Size:      512b+     1024b+     2048b+
 No. of Reads:          0
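For anyone trying to reproduce this, output like the above would typically
come from the volume profiler; a minimal sketch, assuming the volume is
named gluster_volume_0 after its brick directory:

  # Enable profiling, let the workload run, then dump the statistics
  gluster volume profile gluster_volume_0 start
  sleep 30
  gluster volume profile gluster_volume_0 info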
This is the info file's contents. Is there another file you'd want to see
for the config?
type=2
count=2
status=1
sub_count=2
stripe_count=1
replica_count=2
disperse_count=0
redundancy_count=0
version=3
transport-type=0
volume-id=98c258e6-ae9e-4407-8f25-7e3f7700e100
username=removed just cause
password=removed
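For context, this reads like the flat state file glusterd keeps at
/var/lib/glusterd/vols/<volname>/info. The same settings can be shown in
human-readable form with the CLI (volume name assumed as above):

  gluster volume info gluster_volume_0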
Hi Mike,
Can you please share your gluster volume configuration?
Also, do you notice anything in the client logs on the node where the
fileio backstore is configured?
Thanks,
Vijay
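For reference, both requests above can usually be satisfied with something
like the following; the log path is the usual default, and the log file
name here is hypothetical (FUSE client logs are named after the mount
point):

  # Dump the volume configuration
  gluster volume info
  # Check the client log for the mount in question
  less /var/log/glusterfs/mnt-gluster.log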
On Wed, Oct 5, 2016 at 8:56 PM, Michael Ciccarelli wrote:
> So I have a fairly basic setup using glusterfs between 2 nodes