Hi,
I have a gluster 3.12.6-1 installation with 2 configured volumes.
Several times a day, some bricks report the lines below:
[2018-09-30 20:36:27.348015] W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-volumedisk0-posix: link
Hi,
I would like to implement a distributed-replicated architecture, but some
of my nodes have a different number of bricks. My idea is to do a replica 2
across 6 nodes (all bricks with the same size).
My gluster architecture is:
Node name | Brick 1 | Brick 2
to unify the DHT range per brick?
Thanks a lot,
Greetings.
Jose V.
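
For example (the volume name, host names and brick paths below are just
placeholders, not the real ones from this setup), a replica 2
distributed-replicated volume over 6 nodes could be created like this; every
pair of consecutive bricks in the list becomes one replica set, so the two
bricks of a pair should sit on different nodes:

  gluster volume create testvol replica 2 \
      node1:/bricks/b1 node2:/bricks/b1 \
      node3:/bricks/b1 node4:/bricks/b1 \
      node5:/bricks/b1 node6:/bricks/b1
  gluster volume start testvol

Nodes that have an extra brick can contribute further pairs later via
add-brick, as long as each new pair is again spread over two different nodes.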
2018-03-01 10:39 GMT+01:00 Jose V. Carrión <jocar...@gmail.com>:
> Hi Nithya,
> Below the output of both volumes:
>
> [root@stor1t ~]# gluster volume rebalance volumedisk1 status
>
gluster volume add-brick volumedisk0 stor3data:/mnt/disk_b1/glusterfs/vol0
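
After an add-brick like the one above, existing directories only get hash
ranges on the new brick once a fix-layout or rebalance has run, so the usual
follow-up (reusing the volume name from the command above) is something like:

  gluster volume rebalance volumedisk0 start
  gluster volume rebalance volumedisk0 status

A plain fix-layout (gluster volume rebalance volumedisk0 fix-layout start)
only rewrites the directory layouts without migrating data; a full rebalance
also moves files onto the new brick.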
2018-03-01 6:32 GMT+01:00 Nithya Balachandran <nbala...@redhat.com>:
> Hi Jose,
>
> On 28 February 2018 at 22:31, Jose V. Carrión <jocar...@gmail.com> wrote:
>
>> Hi Nithya,
>>
Task Status of Volume volumedisk1
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : d0048704-beeb-4a6a-ae94-7e7916423fd3
Status               : completed
2018-02-28 15:40 GMT+01:00 Nithya Balachandran <nbala...@redhat.com>:
> Hi Jose,
>
> On 28
> Regards,
> Nithya
>
>
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
>
> On 28 February 2018 at 03:03, Jose V. Carrión <jocar...@gmail.com> wrote:
>
>> Hi,
>>
>> Some days ago all my glusterfs configuration was working fine. Today I
>>
Hi,
Some days ago all my glusterfs configuration was working fine. Today I
realized that the total size reported by the df command has changed and is
smaller than the aggregated capacity of all the bricks in the volume.
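
A quick way to see where the discrepancy comes from (mount points below are
placeholders) is to compare the capacity of the brick filesystems with what
the glusterfs mount reports:

  # on each brick server: capacity of the underlying brick filesystems
  df -h /mnt/disk_b1 /mnt/disk_c1
  # on a client: capacity reported through the glusterfs mount of the volume
  df -h /mnt/volumedisk1

If several bricks of a node sit on the same filesystem, it may also be worth
looking at the shared-brick-count values that glusterd writes into the brick
volfiles, e.g. grep shared-brick-count /var/lib/glusterd/vols/volumedisk1/*,
which I assume is the mechanism behind the bug linked above [1].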
I checked that the status of all the volumes is fine and that all the
glusterd daemons are