Thanks for the reply. I found these values:

datastore_temp.serda1.glusterfs-p1-b1.vol:    option shared-brick-count 2
datastore_temp.serda1.glusterfs-p2-b2.vol:    option shared-brick-count 2
datastore_temp.serda2.glusterfs-p1-b1.vol:    option shared-brick-count 0
datastore_temp.serda2.glusterfs-p2-b2.vol:    option shared-brick-count 0

Do I need to change the values on the serda2 node?
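For reference, the values pasted above can be cross-checked mechanically. This is a minimal sketch (not the official fix; see [2] in Nithya's reply for the supported workaround), assuming each brick is on its own partition, in which case bug 1517260 implies the expected shared-brick-count is 1:

```python
# Hedged sketch: parse "shared-brick-count" lines like the ones grep'd from
# /var/lib/glusterd/vols/<volumename> and flag entries whose value differs
# from the expected one. expected=1 is an assumption based on each brick
# being on a separate partition; adjust if bricks share a filesystem.
import re

def check_shared_brick_count(lines, expected=1):
    """Return {volfile: value} for entries whose count != expected."""
    pattern = re.compile(r"^(\S+\.vol):\s*option shared-brick-count (\d+)")
    mismatches = {}
    for line in lines:
        m = pattern.match(line.strip())
        if m:
            value = int(m.group(2))
            if value != expected:
                mismatches[m.group(1)] = value
    return mismatches

# The four lines from this thread: all differ from 1, so all are flagged.
sample = [
    "datastore_temp.serda1.glusterfs-p1-b1.vol:    option shared-brick-count 2",
    "datastore_temp.serda1.glusterfs-p2-b2.vol:    option shared-brick-count 2",
    "datastore_temp.serda2.glusterfs-p1-b1.vol:    option shared-brick-count 0",
    "datastore_temp.serda2.glusterfs-p2-b2.vol:    option shared-brick-count 0",
]

print(check_shared_brick_count(sample))
```

Since all four files report a value other than 1 (2 on serda1, 0 on serda2), both nodes look affected, not just serda2.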



2018-02-23 13:07 GMT+01:00 Nithya Balachandran <nbala...@redhat.com>:

> Hi Daniele,
>
> Do you mean that the df -h output is incorrect for the volume post the
> upgrade?
>
> If yes and the bricks are on separate partitions, you might be running
> into [1]. Can you search for the string "option shared-brick-count" in
> the files in /var/lib/glusterd/vols/<volumename> and let us know the
> value? The workaround to get this working on the cluster is available in
> [2].
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1517260#c19
>
>
> Regards,
> Nithya
>
> On 23 February 2018 at 15:12, Daniele Frulla <daniele.fru...@gmail.com>
> wrote:
>
>> I have migrated to a new version of Gluster, but
>> when I run the command
>>
>> df -h
>>
>> the reported space is less than the total.
>>
>> Configuration:
>> 2 peers
>>
>> Brick serda2:/glusterfs/p2/b2               49152     0          Y       1560
>> Brick serda1:/glusterfs/p2/b2               49152     0          Y       1462
>> Brick serda1:/glusterfs/p1/b1               49153     0          Y       1476
>> Brick serda2:/glusterfs/p1/b1               49153     0          Y       1566
>> Self-heal Daemon on localhost               N/A       N/A        Y       1469
>> Self-heal Daemon on serda1                  N/A       N/A        Y       1286
>>
>> Thanks
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
