By that same token, you can also use a hostname with multiple A records
and glusterd will use those for failover to retrieve the vol file.
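For illustration (hostnames and addresses below are hypothetical, not from this thread), that round-robin setup might look like:

```shell
# Hypothetical zone entries: one name, multiple A records,
# one per server running glusterd:
#   gluster.example.com.  A  192.0.2.11
#   gluster.example.com.  A  192.0.2.12
#   gluster.example.com.  A  192.0.2.13

# /etc/fstab entry using that name as the volfile server:
# gluster.example.com:/gv1  /mnt/gv1  glusterfs  defaults,_netdev  0 0
```

If the first address fails to answer during the initial volfile fetch, the client can try the others.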
On 8/31/22 8:32 AM, Joe Julian wrote:
Kind-of. That just tells the client what other nodes it can use to
retrieve that volume configuration. It's only used during that initial
fetch.
On 8/31/22 8:26 AM, Péter Károly JUHÁSZ wrote:
You can also add the mount option: backupvolfile-server to let the client
know the other nodes.
Matthew J Black wrote on Wed, 31 Aug 2022 at 17:21:
Ah, it all now falls into place: I was unaware that the client receives
that file upon initial contact with the cluster, and thus has that
information at hand independently of the cluster nodes.
Thank you for taking the time to educate a poor newbie - it is very much
appreciated.
Cheers
Dul
You know when you do a `gluster volume info` and you get the whole
volume definition? The client graph is built from the same info. In
fact, if you look in /var/lib/glusterd/vols/$volume_name you'll find
some ".vol" files. `$volume_name.tcp-fuse.vol` is the configuration that
the clients receive.
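On any server node you can inspect those files directly (volume name gv1 here is hypothetical; the path is as described above):

```shell
# List the generated graph files for a volume:
ls /var/lib/glusterd/vols/gv1/

# The FUSE client graph that servers hand out on mount:
cat /var/lib/glusterd/vols/gv1/gv1.tcp-fuse.vol
```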
Hi Joe,
Thanks for getting back to me about this, it was helpful, and I really
appreciate it.
I am, however, still (slightly) confused - *how* does the client "know"
the addresses of the other servers in the cluster (for read or write
purposes), when all the client has is the line in the fstab?
With a replica volume the client connects and writes to all the replicas
directly. For reads, when a filename is looked up the client checks with
all the replicas and, if the file is healthy, opens a read connection to
the first replica to respond (by default).
If a server is shut down, the client simply continues reading from and
writing to the remaining replicas.
On 8/31/22 2:55 AM, duluxoz wrote:
what happens to client4:/data/gv1/file1 when gfs1 fails
In part, you can tell the client so explicitly.
mount.glusterfs(8) man page, 2nd instance's syntax:
SYNOPSIS
    mount -t glusterfs [-o <options>] <volumeserver>:/<volume>[/<subdir>] <mountpoint>
    mount -t glusterfs [-o <options>] <server1>,<server2>,<server3>,..:/<volname>[/<subdir>] <mount_point>
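A concrete instance of that second, comma-separated form (hostnames gfs1..gfs3 and volume gv1 are hypothetical):

```shell
mount -t glusterfs gfs1,gfs2,gfs3:/gv1 /mnt/gv1

# Or as an fstab line:
# gfs1,gfs2,gfs3:/gv1  /mnt/gv1  glusterfs  defaults,_netdev  0 0
```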