On 1 March 2018 at 15:25, Jose V. Carrión wrote:
Hi Nithya,

I'm sorry for my last incomplete message.

Below the output of both volumes:

[root@stor1t ~]# gluster volume rebalance volumedisk1 status
Node   Rebalanced-files   size   scanned   failures   skipped   status   run time in h:m:s
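A minimal sketch of how a rebalance like this can be watched until every node finishes, assuming the status column prints "in progress" while a node is still working (only the command already used above, plus standard shell):

  # Poll until no node reports "in progress", then print the final table
  while gluster volume rebalance volumedisk1 status | grep -q "in progress"; do
      sleep 60
  done
  gluster volume rebalance volumedisk1 status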
On 28 February 2018 at 22:31, Jose V. Carrión wrote:
Hi Nithya,

My initial setup was composed of 2 similar nodes: stor1data and stor2data.
A month ago I expanded both volumes with a new node: stor3data (2 bricks
per volume).

Of course, to add the new peer with its bricks I then ran the 'rebalance
force' operation. This task finished successfully (you can see the info below).
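For reference, a minimal sketch of the expansion steps described above; the brick paths are assumptions (only the host name stor3data and the volume name volumedisk1 appear in this thread):

  # Add the new peer to the trusted pool
  gluster peer probe stor3data
  # Attach its bricks to the existing volume (brick path is hypothetical)
  gluster volume add-brick volumedisk1 stor3data:/mnt/glusterfs/vol1/brick1
  # Spread the existing data onto the new bricks
  gluster volume rebalance volumedisk1 start force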
Hi Jose,

On 28 February 2018 at 18:28, Jose V. Carrión wrote:
> Hi Nithya,
>
> I applied the workaround for this bug and now df shows the right size:

That is good to hear.

> [root@stor1 ~]# df -h
> Filesystem   Size  Used  Avail  Use%  Mounted on
> /dev/sdb1     26T  1,1T    25T    4%  /mnt/glusterfs/vol0
> /dev/sdc1     50T   16T    34T   33%  /mnt/glusterfs/vol1
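For anyone hitting the same bug, a minimal sketch of the kind of workaround being discussed, assuming (per [1]) that the fix is forcing shared-brick-count back to 1 in the generated brick volfiles when every brick sits on its own filesystem; the file glob and the sed call are assumptions, not the authoritative procedure:

  # Check what glusterd wrote into the brick volfiles (run on every node)
  grep -n "shared-brick-count" /var/lib/glusterd/vols/volumedisk1/*
  # Assumed fix: bricks are on separate filesystems, so the value should be 1
  sed -i 's/option shared-brick-count [0-9]*/option shared-brick-count 1/' \
      /var/lib/glusterd/vols/volumedisk1/*.vol
  # Restart glusterd so the corrected volfiles take effect
  systemctl restart glusterd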
Hi Jose,

There is a known issue with gluster 3.12.x builds (see [1]), so you may be
running into this.

The "shared-brick-count" values seem fine on stor1. Please send us the output
of 'grep -n "share" /var/lib/glusterd/vols/volumedisk1/*' from the other nodes
so we can check whether they are the cause.
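A minimal sketch of collecting those values from every node in one go (plain ssh loop; the host names are the ones mentioned in this thread):

  for h in stor1data stor2data stor3data; do
      echo "== $h =="
      ssh "$h" 'grep -n "share" /var/lib/glusterd/vols/volumedisk1/*'
  done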
Hi,

Some days ago all my glusterfs configuration was working fine. Today I
realized that the total size reported by the df command has changed and is now
smaller than the aggregated capacity of all the bricks in the volume.

I checked that the status of all the volumes is fine and all the glusterd
daemons are running.
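A minimal sketch of how such a mismatch can be narrowed down, assuming a volume named volumedisk1 as elsewhere in this thread (the client mount point is hypothetical):

  # Capacity of each brick, as glusterd sees it
  gluster volume status volumedisk1 detail
  # Capacity reported to clients on the mounted volume
  df -h /volumedisk1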