Hi all, I have a working GlusterFS volume. The servers were installed with
Debian 11. Here is the volume information:

Volume Name: pool-gluster01
Type: Replicate
Volume ID: ab9c0268-0942-495f-acca-9de567581a40
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Brick3: arbiter01:/brick1/pool-gluster01 (arbiter)
Options Reconfigured:
cluster.favorite-child-policy: mtime
cluster.quorum-count: 1
cluster.quorum-reads: false
cluster.self-heal-daemon: enable
cluster.heal-timeout: 5
cluster.granular-entry-heal: enable
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

The client accessing the volume is installed on an Ubuntu VM running on a
node of a Proxmox cluster. When I migrate this VM to another node of the
cluster, the Gluster volume stops working; if I migrate it back to the
source node, it works again. Has anyone seen this happen?
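One thing I can add: the FUSE client keeps persistent TCP connections to glusterd (port 24007) and to each brick's port, so after migration the destination node's bridge/VLAN/firewall must still let the VM reach all three servers. A sketch of the checks I would run from inside the VM; `server1` and `server2` are placeholder brick hostnames, since only `arbiter01` appears in the output above:

```shell
# Check glusterd management-port reachability from the VM
# (hostnames are placeholders except arbiter01).
for h in server1 server2 arbiter01; do
    nc -zvw3 "$h" 24007
done

# On any of the Gluster servers, list the brick ports the
# client also needs to reach:
gluster volume status pool-gluster01
```

Mounting with backup volfile servers is also worth having, so the mount does not depend on a single server being reachable:

```shell
mount -t glusterfs -o backup-volfile-servers=server2:arbiter01 \
    server1:/pool-gluster01 /mnt/gluster
```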
André Probst
Consultor de Tecnologia
43 99617 8765

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Gluster-users mailing list
