Can you provide the output of `gluster volume info` from before the remove-brick was done? It's not clear whether you were reducing the replica count or removing a replica subvolume.
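For context, the two operations use different command forms; the following is a sketch using the volume and host names from your mail (treat the exact brick layout as an assumption until your `gluster volume info` output confirms it):

```shell
# Reducing the replica count (dropping one copy from a replica-3 set):
# no data migration is involved, the remaining replicas already hold the data.
gluster volume remove-brick engine replica 2 octavius:/engine/datastore force

# Removing a whole replica subvolume from a distributed-replicated (2x3) volume:
# the data on the removed set must be migrated to the remaining bricks,
# so the start / status / commit workflow applies.
gluster volume remove-brick engine octavius:/engine/datastore \
    tiberius:/engine/datastore clemens:/engine/datastore start
gluster volume remove-brick engine octavius:/engine/datastore \
    tiberius:/engine/datastore clemens:/engine/datastore status
gluster volume remove-brick engine octavius:/engine/datastore \
    tiberius:/engine/datastore clemens:/engine/datastore commit
```

Which form applies determines whether any rebalance should have happened at all.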
On Wed, Sep 11, 2019 at 4:23 PM <tosla...@yandex.ru> wrote:
> Hi.
> There is an ovirt-hosted-engine on a gluster volume "engine"
> (Replicate, replica count 3). I am migrating it to other drives.
> I do:
>
> gluster volume add-brick engine clemens:/gluster-bricks/engine \
>     tiberius:/gluster-bricks/engine octavius:/gluster-bricks/engine force
> volume add-brick: success
>
> gluster volume remove-brick engine tiberius:/engine/datastore \
>     clemens:/engine/datastore octavius:/engine/datastore start
> volume remove-brick start: success
> ID: dd9453d3-b688-4ed8-ad37-ba901615046c
>
> gluster volume remove-brick engine octavius:/engine/datastore status
> Node       Rebalanced-files  size     scanned  failures  skipped  status     run time in h:m:s
> ---------  ----------------  -------  -------  --------  -------  ---------  -----------------
> localhost  7                 50.0GB   34       1         0        completed  0:00:02
> clemens    11                21.0MB   31       0         0        completed  0:00:02
> tiberius   12                25.0MB   36       0         0        completed  0:00:02
>
> gluster volume status engine
> Status of volume: engine
> Gluster process                          TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick octavius:/engine/datastore         49156     0          Y       15669
> Brick tiberius:/engine/datastore         49156     0          Y       15930
> Brick clemens:/engine/datastore          49156     0          Y       16193
> Brick clemens:/gluster-bricks/engine     49159     0          Y       6168
> Brick tiberius:/gluster-bricks/engine    49163     0          Y       29524
> Brick octavius:/gluster-bricks/engine    49159     0          Y       50056
> Self-heal Daemon on localhost            N/A       N/A        Y       50087
> Self-heal Daemon on clemens              N/A       N/A        Y       6263
> Self-heal Daemon on tiberius             N/A       N/A        Y       29583
>
> Task Status of Volume engine
> ------------------------------------------------------------------------------
> Task           : Remove brick
> ID             : dd9453d3-b688-4ed8-ad37-ba901615046c
> Removed bricks:
> tiberius:/engine/datastore
> clemens:/engine/datastore
> octavius:/engine/datastore
> Status         : completed
>
> But the data did not migrate:
>
> du -hs /gluster-bricks/engine/ /engine/datastore/
> 49M   /gluster-bricks/engine/
> 20G   /engine/datastore/
>
> Can you give some advice?
> _______________________________________________
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q2KHWTOXBWO6FQKKTM36OIEMN2B6GYQK/
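As a general safety check before acting on a completed remove-brick task (a sketch reusing the brick names from the quoted mail, not a diagnosis of this specific cluster): verify that the old bricks have actually drained before running `commit`, because committing detaches the bricks along with any data still on them.

```shell
# Re-check the rebalance summary for the remove-brick task,
# paying attention to the "failures" and "skipped" columns:
gluster volume remove-brick engine tiberius:/engine/datastore \
    clemens:/engine/datastore octavius:/engine/datastore status

# Compare on-disk usage of the old and new bricks on each host;
# if the old bricks still hold most of the data (49M vs 20G here),
# the migration has not really happened and "commit" would be unsafe:
du -sh /engine/datastore/ /gluster-bricks/engine/
```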
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/WD5AMFARITLPV3ZU2TSVKIH7A3OIP5EK/