Re: [Gluster-users] How Does Gluster Failover

2022-08-31 Thread Joe Julian
By that same token, you can also use a hostname with multiple A records; the 
client will fall back through those addresses when retrieving the vol file from glusterd.
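As an illustration only (a sketch using the hostnames and addresses mentioned later in this thread, not a tested configuration):

~~~
; round-robin name resolving to all three Gluster servers
glustercluster  IN  A  192.168.1.1
glustercluster  IN  A  192.168.1.2
glustercluster  IN  A  192.168.1.3
~~~

~~~
# /etc/fstab on the client; the name is only used for the initial volfile fetch,
# after which the client connects to every brick directly
glustercluster:gv1  /data/gv1  glusterfs  defaults  0 2
~~~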



Re: [Gluster-users] How Does Gluster Failover

2022-08-31 Thread Joe Julian
Kind of. That just tells the client what other nodes it can use to retrieve 
the volume configuration; it's only used during that initial fetch.



Re: [Gluster-users] How Does Gluster Failover

2022-08-31 Thread Péter Károly JUHÁSZ
You can also add the mount option backupvolfile-server to let the 
client know about the other nodes.


Re: [Gluster-users] How Does Gluster Failover

2022-08-31 Thread Matthew J Black
Ah, it all now falls into place: I was unaware that the client receives 
that file upon initial contact with the cluster, and thus has that 
information at hand independently of the cluster nodes.


Thank you for taking the time to educate a poor newbie - it is very much 
appreciated.


Cheers

Dulux-Oz


Re: [Gluster-users] How Does Gluster Failover

2022-08-31 Thread Joe Julian
You know how, when you run `gluster volume info`, you get the whole volume 
definition? The client graph is built from the same info. In fact, if you look 
in /var/lib/glusterd/vols/$volume_name you'll find some ".vol" files. 
`$volume_name.tcp-fuse.vol` is the configuration that the clients receive from 
whichever server they initially connect to. You'll notice that file has 
multiple "type/client" sections, each establishing a tcp connection to a server.
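For illustration, a client section in such a volfile looks roughly like the sketch below; the brick path is made up and exact option names vary between versions:

~~~
# one of these blocks is generated per brick (brick paths here are made up)
volume gv1-client-0
    type protocol/client
    option remote-host gfs1
    option remote-subvolume /bricks/brick1/gv1
    option transport-type socket
end-volume

volume gv1-client-1
    type protocol/client
    option remote-host gfs2
    option remote-subvolume /bricks/brick1/gv1
    option transport-type socket
end-volume
~~~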


Side note: you can also see in that file how the micro-kernels (translators) 
are used to build all the logic that forms the volume, which is kinda cool. 
Back when I first started using Gluster, there was no glusterd and you had to 
build those .vol files by hand.




Re: [Gluster-users] How Does Gluster Failover

2022-08-31 Thread Matthew J Black

Hi Joe,

Thanks for getting back to me about this, it was helpful, and I really 
appreciate it.


I am, however, still (slightly) confused - *how* does the client "know" 
the addresses of the other servers in the cluster (for read or write 
purposes), when all the client has is the line in the fstab file: 
"gfs1:gv1  /data/gv1  glusterfs defaults  0 2"? I'm missing something, 
somewhere, in all of this, and I can't work out what that "something" 
is.  :-)


Your help truly is appreciated

Cheers

Dulux-Oz



Re: [Gluster-users] How Does Gluster Failover

2022-08-31 Thread Joe Julian
With a replica volume the client connects and writes to all the replicas 
directly. For reads, when a filename is looked up the client checks with 
all the replicas and, if the file is healthy, opens a read connection to 
the first replica to respond (by default).


If a server is shut down, the client receives the tcp messages that 
close the connection. For read operations, it chooses the next server. 
Writes will just continue to the remaining replicas (metadata is stored 
in extended attributes to inform future lookups and the self-healer of 
file health).


If a server crashes (no tcp teardown), the volume will pause for 
ping-timeout seconds (42 by default), then continue as above. BTW, that 
42-second timeout shouldn't be a big deal: failures should be far enough 
apart (the MTBF high enough) that this still easily gets you five or six 
nines.
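The timeout Joe refers to is the network.ping-timeout volume option; a quick sketch of inspecting or changing it (42 seconds is the default, and lowering it much is generally discouraged):

~~~
# show the current value (reports the default if it was never set)
gluster volume get gv1 network.ping-timeout

# change it, for example to 30 seconds (illustrative only)
gluster volume set gv1 network.ping-timeout 30
~~~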




Re: [Gluster-users] How Does Gluster Failover

2022-08-31 Thread Karl Kleinpaste

On 8/31/22 2:55 AM, duluxoz wrote:

what happens to client4:/data/gv1/file1 when gfs1 fails


In part, you can tell the client explicitly. From the
mount.glusterfs(8) man page, note the second form of the syntax:

SYNOPSIS
   mount -t glusterfs [-o <options>] <volumeserver>:/<volume>[/<subdir>] <mountpoint>

   mount -t glusterfs [-o <options>] <server1>,<server2>,<server3>,..:/<volname>[/<subdir>] <mountpoint>

   mount -t glusterfs [-o <options>] <path/to/volumefile> <mountpoint>
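Applied to the hosts in this thread, the second form would look something like the sketch below; the extra servers are only consulted for the initial volfile fetch:

~~~
mount -t glusterfs gfs1,gfs2,gfs3:/gv1 /data/gv1
~~~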






[Gluster-users] How Does Gluster Failover

2022-08-31 Thread duluxoz

Hi Guys & Gals,

A Gluster newbie question for sure, but something I just don't "get" (or 
I've missed in the doco, mailing lists, etc):


What happens to a Gluster Client when a Gluster Cluster Node goes 
off-line / fails-over?


How does the Client "know" to use (connect to) another Gluster Node in 
the Gluster Cluster?


Let me elaborate.

I've got four hosts: gfs1, gfs2, gfs3, and client4 sitting on 
192.168.1.1/24, .2, .3, and .4 respectively.


DNS is set up and working correctly.

gfs1, gfs2, and gfs3 form a "Gluster Cluster" with a Gluster Volume (gv1) 
replicated across all three nodes. This is all working correctly (ie a 
file (file1) created/modified on gfs1:/gv1 is replicated correctly to 
gfs2:/gv1 and gfs3:/gv1).


client4 has an entry in its /etc/fstab file which reads: "gfs1:gv1  
/data/gv1  glusterfs defaults  0 2". This is also all working correctly 
(ie client4:/data/gv1/file1 is accessible and replicated).


So, (and I haven't tested this yet) what happens to 
client4:/data/gv1/file1 when gfs1 fails (ie is turned off, crashes, etc)?


Does client4 "automatically" switch to using one of the other two 
Gluster Nodes, or do I have something wrong in client4's /etc/fstab 
file, or an error/misconfiguration somewhere else?


I thought about setting some DNS entries along the lines of:

~~~
glustercluster  IN  A  192.168.0.1
glustercluster  IN  A  192.168.0.2
glustercluster  IN  A  192.168.0.3
~~~

and having client4's /etc/fstab file read: "glustercluster:gv1 
/data/gv1  glusterfs  defaults  0 2", but this is a round-robin DNS 
config and I'm not sure how Gluster treats this situation.


So, if people could comment / point me in the correct direction I would 
really appreciate it - thanks.


Dulux-Oz






[Gluster-users] Load balancing

2022-08-31 Thread Nguetchouang Ngongang Kevin
Hello everyone,

I'm working with GlusterFS and I have a question.

Is it possible to achieve I/O request load balancing in GlusterFS?

If yes, how can it be done?

-- 
Nguetchouang Ngongang Kevin
ENS de Lyon
https://perso.ens-lyon.fr/kevin.nguetchouang/





Re: [Gluster-users] Directory in split brain does not heal - Gfs 9.2

2022-08-31 Thread Ilias Chasapakis forumZFD

Hi all,

so we went further and deleted the entries (data and gfid). The 
split-brain is now gone, but when we triggered a heal again (simple and full) 
we have many entries stuck in healing (no split-brain items). They have 
been there for days/weeks and keep appearing.


We would like to heal single files, but as they are not in split-brain I 
guess this is not possible, right? The "source-brick" technique only works 
in that case, I think?
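That understanding matches the documented CLI: the source-brick policy belongs to the `gluster volume heal ... split-brain` family and only applies to files Gluster actually reports as split-brain. A sketch with placeholders:

~~~
# choose one brick's copy as the good one for a specific split-brain file
gluster volume heal <volname> split-brain source-brick <hostname>:<brick-path> <path-on-volume>
~~~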


A concrete example of one of the files stuck in the healing queue: I 
checked the attributes with getfattr and saw that one of the nodes has 
neither the data nor the gfid; it is missing completely. How could I 
trigger replication from the "good copy" to the Gluster node that does 
not have the file? Is that possible for entries *not* in split-brain? 
Listing the affected directory on the mount side (ls) did not seem to 
trigger a heal.
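For context, the getfattr check mentioned here is normally run on each server against the brick copy of the file (the brick path below is an example, not from this setup):

~~~
# dump all extended attributes of the brick copy in hex
getfattr -d -m . -e hex /bricks/brick1/gv-ho/path/to/file
~~~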


Also, the shd logs have some entries that are ambiguous (to me). The sink 
value is empty; shouldn't it be a number indicating it is healing?


[2022-08-28 17:22:11.098604 +] I [MSGID: 108026] 
[afr-self-heal-common.c:1742:afr_log_selfheal] 0-vol-replicate-0: 
Completed metadata selfheal on 94503c97-7731-4aa1-8a14-2c6ea5a84a15. 
sources=1 [2]  sinks=
[2022-08-28 17:22:16.227091 +] I [MSGID: 108026] 
[afr-self-heal-common.c:1742:afr_log_selfheal] 0-gv-ho-replicate-0: 
Completed metadata selfheal on 94503c97-7731-4aa1-8a14-2c6ea5a84a15. 
sources=1 [2]  sinks= 


I tried to use the guide here:

https://docs.gluster.org/en/main/Troubleshooting/troubleshooting-afr/#ii-self-heal-is-stuck-not-getting-completed

but I find it difficult to apply.

Do you have any suggestions on how to "unblock" these stuck entries, and 
what a methodical approach to troubleshooting this situation would be?


Finally, I would like to ask whether updating the Gluster nodes (we have 
pending updates now) would be too risky without first fixing the unhealed 
entries. Our hope is that an update might eventually fix the problems.


Best regards.
Ilias


On 18.08.22 at 23:38, Strahil Nikolov wrote:
If you refer to <brick-path>/.glusterfs/<xx>/<yy>/<gfid> 
- it's a hard link to the file on the brick.

Directories in the .glusterfs are just symbolic links.

Can you clarify what you are planning to delete?

Best Regards,
Strahil Nikolov

On Wed, Aug 17, 2022 at 14:35, Ilias Chasapakis forumZFD wrote:

Hi Thomas,

Thanks again for your replies and patience :)

We have also offline backups of the files.

So, just to verify I understood this correctly: deleting a
.glusterfs gfid file doesn't inherently carry the risk of
losing the complete brick, right?

I saw you already applied this for your own purposes and it worked for
you, but just as a confirmation. Of course it is fully
understood that the operational risk is on our side.

It is just an "information-wise" question :)

Best regards
Ilias

On 17.08.22 at 12:47, Thomas Bätzler wrote:


Hello Ilias,

Please note that you can and should back up all of the file(s)
involved in the split-brain by accessing them over the brick root
instead of the gluster mount. That is also the reason why you're
not in danger of a failure cascade wiping out your data.

Be careful when replacing bricks, though. You want that heal to
go in the right direction 😉
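In other words, before any destructive step the copies can be saved straight from each brick; a sketch with made-up paths:

~~~
# on each server: copy the affected file from the brick, not from the Gluster mount
cp -a /bricks/brick1/gfsVol/path/to/file /root/splitbrain-backup/$(hostname)-file
~~~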

Kind regards,

p.p. Thomas Bätzler


*From:* Gluster-users *On behalf of* Ilias Chasapakis forumZFD
*Sent:* Wednesday, 17 August 2022 11:18
*To:* gluster-users@gluster.org
*Subject:* Re: [Gluster-users] Directory in split brain does not
heal - Gfs 9.2

Thanks for the suggestions. My question is whether the risk is
limited to losing the file/dir, or whether it could create
inconsistencies that span the bricks and "break everything".
Of course we have to take action anyway so this does not spread
(we now have a second entry that has developed an "unhealable"
directory split-brain), so it is just a question of evaluating
before acting.

On 12.08.22 at 18:12, Thomas Bätzler wrote:

On 12.08.2022 at 17:12, Ilias Chasapakis forumZFD wrote:

Dear fellow gluster users,

we are facing a problem with our replica 3 setup.
Glusterfs version is 9.2.

We have a problem with a directory that is in split-brain
and we cannot manage to heal with:

gluster volume heal gfsVol split-brain latest-mtime
/folder