Hi David,
thanks for the answer, but if I use the real IPs and one controller goes
down (i.e. controller1), when I run an ll command on the gluster volume
from the client I see only the files present in brick2, "exported" by
controller2.
The iSCSI LUNs can be accessed from multiple hosts, but we cannot write
to the same LUN from 2 hosts (controller1 and controller2) at the same
time (the filesystem is not a cluster filesystem like OCFS or GPFS).
However, if controller1 is powered off, controller2 can write to brick1,
attached via iSCSI, so when I run an ll command on the gluster volume
the client displays the entire volume made of brick1 and brick2.
We would like to use VIPs so that we do not have to reconfigure the
volume if one controller goes down.
Let me give an example.
We have controller1 192.168.1.10 and controller2 192.168.1.11.
If I create the volume like this:
gluster volume create volume1 transport tcp
192.168.1.10:/data/brick1/sda 192.168.1.11:/data/brick2/sdb
If controller1 goes down, on all the clients we see only the data stored
in /data/brick2/sdb, and if I want to see the files in brick1 I have to
reconfigure the volume like this (using the IP of controller2 for both
bricks):
gluster volume create volume1 transport tcp
192.168.1.11:/data/brick1/sda 192.168.1.11:/data/brick2/sdb
If we succeed with our configuration we could have this:
We have controller1 192.168.1.10 with VIP 10.0.1.10 and controller2
192.168.1.11 with VIP 10.0.1.11.
We can create the volume like this:
gluster volume create volume1 transport tcp 10.0.1.10:/data/brick1/sda
10.0.1.11:/data/brick2/sdb
If controller1 goes down, keepalived moves the 10.0.1.10 VIP onto
controller2 (192.168.1.11), /data/brick1/sda starts to be managed by
controller2, and the clients see the entire volume, with all the files
in brick1 and brick2, without any reconfiguration.
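Just to give an idea, this is roughly the keepalived configuration we
have in mind on controller1 (the interface name, router ids and
priorities here are only placeholders, not our real config); controller2
would mirror it with the priorities swapped, so each node is MASTER for
its own VIP:
# /etc/keepalived/keepalived.conf on controller1 (sketch)
vrrp_instance VI_1 {
    state MASTER
    interface eth0              # placeholder interface name
    virtual_router_id 51
    priority 150
    virtual_ipaddress {
        10.0.1.10               # VIP normally held by controller1
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 100
    virtual_ipaddress {
        10.0.1.11               # taken over only if controller2 fails
    }
}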
I hope I have clarified what we would like to do.
Cheers
Sergio
On 01/12/2015 02:14 PM, David Gibbons wrote:
I use VIPs and keepalived on my production configuration as well. You
don't want to peer probe with the VIP. You want to peer probe with the
actual IP. The VIP is merely a forward-facing mechanism for clients
to connect to, and that's why it fails between your gluster peers. The
peers themselves already know how to handle failover in a more
graceful way than a VIP :).
Remove the peers then re-probe with the actual IP instead of the VIP.
The VIP is just for clients.
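Roughly something like this, using the IP1/IP2/VIP1/VIP2 placeholders
from your mail (adjust to your real addresses):
# on controller1: drop the VIP peering and re-probe the real IP
gluster peer detach VIP2
gluster peer probe IP2
# define the bricks with the real IPs
gluster volume create testvolume transport tcp \
    IP1:/data/brick1/sda IP2:/data/brick2/sdb
# clients keep mounting through the VIPs only, e.g.:
mount -t glusterfs -o backup-volfile-servers=VIP2 VIP1:/testvolume /mnt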
Cheers,
Dave
On Mon, Jan 12, 2015 at 7:57 AM, Sergio Traldi
<[email protected]> wrote:
Hi,
We have a SAN with 14 TB of disk space and 2 controllers attached to
this SAN.
We want to use this storage with gluster.
Our goal is to use this storage in high availability, i.e. we want
to keep using all the storage even if there are some problems with
one of the controllers.
Our idea is the following:
- Create 2 LUNs
- Attach the 2 LUNs via iSCSI to each controller host.
- Create a brick on each controller node (brick1 for Controller1
and brick2 for Controller2)
- Perform the iSCSI login so that each controller is able to mount
disk1 on brick1 and disk2 on brick2 (a rough sketch of these commands
is below, after this list).
- Install keepalived (routing software whose main goal is to provide
simple and robust facilities for load balancing and high availability
on Linux).
- Create 2 VIPs (virtual IPs), one for controller1 and the other for
controller2. So the situation would be:
o Controller1, with its IP (IP1), would also have a VIP (VIP1), with
both iSCSI disks mounted but only one used in R/W mode (brick1).
o Controller2, with its IP (IP2), would have a VIP (VIP2), with both
iSCSI disks mounted but only one used in R/W mode (brick2).
- The glusterfs volume would be mounted on the clients in fail-over
mode, i.e. in the fstab there would be something like:
VIP1:/volume /var/lib/nova/instances glusterfs defaults,log-level=ERROR,_netdev,backup-volfile-servers=VIP2 0 0
- Keepalived would be configured to move VIP1 to IP2 if, for example,
controller1 has to be shut down; the same for VIP2.
This VIP change should hopefully not impact the operations on the
clients.
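For the iSCSI login step, what we do is roughly the following on
controller1 (the portal address and target IQN here are only
placeholders, not our real ones):
# discover and log in to the SAN target (portal/IQN are placeholders)
iscsiadm -m discovery -t sendtargets -p <SAN_portal_IP>
iscsiadm -m node -T iqn.2015-01.com.example:lun1 -p <SAN_portal_IP> --login
# format the LUN and mount it where the brick directory lives
mkfs.xfs /dev/sda
mkdir -p /data/brick1
mount /dev/sda /data/brick1
mkdir -p /data/brick1/sda    # brick directory used in the volume create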
We are trying this setup, but when we try to create a volume:
gluster volume create testvolume transport tcp
VIP1:/data/brick1/sda VIP2:/data/brick2/sdb
we obtain this error:
volume create: testvolume : failed: Host VIP2 is not in 'Peer in
Cluster' state
But if we check the peer status:
[controller1]# gluster peer status
Number of Peers: 1
Hostname: VIP2
Uuid: 6692a700-4c41-4e8d-8810-48f9d1ee9315
State: Accepted peer request (Connected)
[controller2]# gluster peer status
Number of Peers: 1
Hostname: IP1
Uuid: 074e9eea-6bf5-4ac8-8ac9-d1159bb4d452
State: Accepted peer request (Disconnected)
If we try to:
[controller2]# gluster peer probe VIP1
we obtain this error:
peer probe: failed: Probe returned with unknown errno 107
Any idea why I cannot create a volume with two virtual IPs?
Thinking it could be a DNS problem, I also tried putting these lines in
/etc/hosts on each controller:
VIP1 controller1.mydomain controller1
VIP2 controller2.mydomain controller2
In the log file of controller2 I just found:
[2015-01-12 11:42:47.549545] E
[glusterd-handshake.c:1644:__glusterd_mgmt_hndsk_version_cbk]
0-management: failed to get the 'versions' from peer (IP1:24007)
In the log file of controller1 I just found:
[2015-01-12 11:44:44.229600] E
[glusterd-handshake.c:914:gd_validate_mgmt_hndsk_req]
0-management: Rejecting management handshake request from unknown
peer IP2:1018
[2015-01-12 11:44:47.234863] E
[glusterd-handshake.c:914:gd_validate_mgmt_hndsk_req]
0-management: Rejecting management handshake request from unknown
peer IP2:1017
[2015-01-12 11:44:50.240324] E
[glusterd-handshake.c:914:gd_validate_mgmt_hndsk_req]
0-management: Rejecting management handshake request from unknown
peer IP2:1001
If I try a telnet:
[controller2]# telnet VIP1 24007
and
[controller1]# telnet VIP2 24007
they work fine.
Any idea whether it is possible to create a volume using VIPs instead of IPs?
Cheers
Sergio
_______________________________________________
Gluster-users mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-users