Re: [Linux-cluster] 3 node cluster and quorum disk?

2009-09-10 Thread Alex Kompel
On Tue, Sep 1, 2009 at 12:19 PM, Lon Hohberger l...@redhat.com wrote:

 On Wed, 2009-08-26 at 16:11 +0200, Jakov Sosic wrote:
  Hi.
 
  I have a situation - when two nodes are up in 3 node cluster, and one
  node goes down, the cluster loses quorum - although I'm using qdiskd...


<!-- Token -->
<totem token="55000"/>

<!-- Quorum Disk -->
<quorumd interval="5" tko="5" votes="2"
         label="SAS-qdisk" status_file="/tmp/qdisk"/>

 <cman quorum_dev_poll="55000"/>

 If that doesn't fix it entirely, get rid of status_file, decrease
 interval, and increase tko.  Try:

 interval=2 tko=12 ?


I had to do some code diving to figure out the cluster timeouts. Is this a
correct assumption?

qdisk.tko_up*qdisk.interval < qdisk.wait_master*qdisk.interval <
cman.quorum_dev_poll/1000 + qdisk.interval < qdisk.tko*qdisk.interval <
totem.token/1000
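
For a quick sanity check, here is a rough shell sketch of that chain (the
variable names mirror the parameters above; every value is an illustrative
placeholder, not a recommendation):

#!/bin/sh
# Check the timeout ordering above; all figures here are examples only.
interval=2              # quorumd interval (seconds)
tko=12                  # quorumd tko
tko_up=4                # qdisk tko_up (as referenced above)
wait_master=12          # qdisk wait_master (as referenced above)
quorum_dev_poll=55000   # cman quorum_dev_poll (milliseconds)
token=70000             # totem token (milliseconds)

awk -v i="$interval" -v t="$tko" -v tu="$tko_up" -v wm="$wait_master" \
    -v qdp="$quorum_dev_poll" -v tok="$token" 'BEGIN {
    ok = (tu*i < wm*i) && (wm*i < qdp/1000 + i) &&
         (qdp/1000 + i < t*i) && (t*i < tok/1000)
    if (ok) print "timeout chain holds"; else print "timeout chain violated"
    exit (ok ? 0 : 1)
}'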

-Alex
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster

Re: [Linux-cluster] Quorum Disk and I/O MultiPath problem

2009-04-22 Thread Alex Kompel
It appears that it took 20 sec for the path to fail over. The quorumd tko is 10 sec
by default. You may want to reduce the HBA timeout or tweak tko for quorumd.
Basically you want to set all cluster timeouts to exceed the expected failover
time of the lower-level systems.
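
As a rough rule-of-thumb sketch (all numbers are illustrative only):

# Pick quorumd tko so interval*tko comfortably exceeds the observed
# lower-level failover time, then keep the cman/totem timeouts above that.
path_failover=20   # seconds the path failover was observed to take
interval=2         # quorumd polling interval in seconds
margin=2           # extra polling cycles of headroom
tko=$(( path_failover / interval + margin ))
echo "suggested quorumd: interval=\"$interval\" tko=\"$tko\" (~$(( interval * tko ))s)"
echo "keep cman quorum_dev_poll and totem token above $(( interval * tko * 1000 )) ms"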
-Alex

On Wed, Apr 22, 2009 at 4:31 PM, Flavio Junior bil...@gmail.com wrote:

 Hi folks,

 I'm trying to configure a 2-node cluster using quorum disk as tie-breaker.

 I'm running into a problem when the active I/O path for the quorum disk goes
 down (I'm testing by turning off one of the two SAN fibre switches): one
 node ends up being fenced.
 I believe this is not right, or that there is a better way to do it, so I'll
 post my configs here and wait for comments :).

 # cluster.conf, cman status/services/nodes -f
 http://rafb.net/p/J4D5UD76.html

 # Log from messages when one switch is turned off
 http://rafb.net/p/SA8Y0A83.html

 Any help, suggestion or comment is appreciated :).

 Thanks.

 --

 Flávio do Carmo Júnior aka waKKu

 --
 Linux-cluster mailing list
 Linux-cluster@redhat.com
 https://www.redhat.com/mailman/listinfo/linux-cluster

--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster

Re: [Linux-cluster] GFS, iSCSI, multipaths and RAID

2008-05-28 Thread Alex Kompel
On Wed, May 28, 2008 at 2:37 PM, Ross Vandegrift [EMAIL PROTECTED] wrote:
 On Wed, May 28, 2008 at 02:18:39PM -0700, Alex Kompel wrote:
 I would not use multipath I/O with iSCSI unless you have specific
 reasons for doing so. iSCSI is only as highly available as your network
 infrastructure allows it to be. If you have full failover within the
 network then you don't need multipath. That simplifies configuration a
 lot. Provided your network core is fully redundant (both link and
 routing layers), you can connect 2 NICs on each server to separate
 switches and bond them (google for channel bonding). Once you have a
 redundant network connection you can use the setup from the article I
 posted earlier. This will give you iSCSI endpoint failover.

 This depends on a lot of things.  In all of the iSCSI storage systems
 I'm familiar with, the same target is provided redundantly via
 different portal IPs.  This provides failover in the case of an iscsi
 controller failing on the storage system.  The network can be as
 redundant as you like, but without multipath, you won't survive a
 portal failure.

In this case the portal failure is handled by host failover mechanisms
(heartbeat, RedHat cluster, etc.) and connection failure is handled by
the network layer. Sometimes you have to use multipath (for example,
if there is no way to do transparent failover on the storage controllers),
but it adds extra complexity on the initiator side, so if there is a
way to avoid it, why not do it?

 If you bond between two different switches, you'll only be able to do
 failover between the NICs.  If you use multipath, you can round-robin
 between them to provide a greater bandwidth overhead.

The same goes for bonding: link aggregation with active-active bonding gives
you that as well.
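
For reference, a minimal sketch of such a bond done with the classic tools
(interface names and the address are placeholders; on RHEL you would normally
make this permanent via ifcfg files instead):

# Active-active bond using 802.3ad link aggregation; the switch ports
# must be configured for LACP for this mode to work.
modprobe bonding mode=802.3ad miimon=100
ip addr add 10.0.0.10/24 dev bond0
ip link set bond0 up
ifenslave bond0 eth0 eth1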

-Alex

--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster


Re: [Linux-cluster] GFS, iSCSI, multipaths and RAID

2008-05-20 Thread Alex Kompel
On Mon, May 19, 2008 at 2:15 PM, Michael O'Sullivan
[EMAIL PROTECTED] wrote:
 Thanks for your response Wendy. Please see a diagram of the system at
 http://www.ndsg.net.nz/ndsg_cluster.jpg/view (or
 http://www.ndsg.net.nz/ndsg_cluster.jpg/image_view_fullscreen for the
 fullscreen view) that (I hope) explains the setup. We are not using FC as we
 are building the SAN with commodity components (the total cost of the system
 was less than NZ $9000). The SAN is designed to hold files for staff and
 students in our department, I'm not sure exactly what applications will use
 the GFS. We are using iscsi-target software although we may upgrade to using
 firmware in the future. We have used CLVM on top of software RAID. I agree
 there are many levels to this system, but I couldn't find the necessary
 hardware/software to implement this in a simpler way. I am hoping the list
 may be helpful here.


So what do you want to get out of this configuration? An iSCSI SAN, a GFS
cluster, or both? I don't see any reason for the 2 additional servers
running GFS on top of the iSCSI SAN.

If you need an iSCSI SAN with iscsi-target then there are a number of
articles on how to set it up. For example:
http://www.pcpro.co.uk/realworld/82284/san-on-the-cheap/page1.html Or
just google for iscsi-target, drbd and heartbeat.

If you need GFS then you can run it on the storage servers (there is
no need for iSCSI in between).

If you need both then it can get tricky, but you can try splitting your
RAID arrays so that half is used by the GFS cluster and half is for
DRBD volumes with iSCSI LUNs on top and RedHat Cluster acting as a
heartbeat for failover (provided you can also do regular failover with
GFS running on the same cluster - I have never tried it before).

-Alex

--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster


Re: [Linux-cluster] Severe problems with 64-bit RHCS on RHEL5.1

2008-04-17 Thread Alex Kompel
2008/4/17 Harri Päiväniemi [EMAIL PROTECTED]:

 The 2nd problem that still exists is:

 When nodes a and b are running and everything is ok, I stop node b's
 cluster daemons. When I start node b again, this situation stays
 forever:

 
 node a - clustat
 Member Status: Quorate

  Member Name                  ID   Status
  ------ ----                  ---- ------
  areenasql1                   1    Online, Local, rgmanager
  areenasql2                   2    Offline
  /dev/sda                     0    Online, Quorum Disk

  Service Name         Owner (Last)                   State
  ------- ----         ----- ------                   -----
  service:areena       areenasql1                     started

 ---

 node b - clustat

 Member Status: Quorate

  Member Name                  ID   Status
  ------ ----                  ---- ------
  areenasql1                   1    Online, rgmanager
  areenasql2                   2    Online, Local, rgmanager
  /dev/sda                     0    Offline, Quorum Disk

  Service Name         Owner (Last)                   State
  ------- ----         ----- ------                   -----
  service:areena       areenasql1                     started


 So node b's quorum disk is offline, the log says it's registered ok and
 the heuristic is UP... node a sees node b as offline. If I reboot node b, it
 works ok and joins ok...

Now that you have mentioned it - I remember stumbling upon a similar
problem. It happens if you restart the cluster services before the
cluster realizes the node is dead. I guess it is a bug, since the node
is in some sort of limbo state at that moment, reporting itself as being
part of the cluster while the cluster does not recognize it as a
member. If you wait 70 seconds (cluster.conf: <totem token="70000"/>)
before starting the cluster services then it will come up fine. The
reboot works for you because it takes longer than 70 sec (correct me if
I am wrong). So try stopping node b's cluster services, waiting 70 secs
and then starting them back up.
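
Roughly, on node b (init script names as on RHEL5; the sleep simply mirrors
the token timeout mentioned above):

# Stop the cluster services, wait out the token timeout, start them again.
service rgmanager stop
service cman stop
sleep 70
service cman start
service rgmanager start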

-Alex

--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster


Re: [Linux-cluster] Is there a fencing agent I can use for iscsi ?(GFS and iSCSI)

2008-04-04 Thread Alex Kompel
2008/4/4 Maciej Bogucki [EMAIL PROTECTED]:

 jr wrote:
  I was wondering if anyone has written an iSCSI fencing agent that I
 could use. I saw one written in perl that ssh'd into the node and added an
 iptables entry in order to fence the server from the iSCSI target. It was
 from 2004 and didn't run correctly on my machine.  Does anyone have any
 ideas? Or should I try and salvage the one I found and fix it up? Thanks.
 
  if you need to use it (as suggested in that other reply), i'd make sure
  it doesn't connect to a node but to the iSCSI target and adds the
  firewall rules there :) or even better if you have a managed switch in
  between where you can simply disable the ethernet port (or even better,
  have iSCSI on a separate vlan and remove the port from that vlan) via an
  ssh script or maybe snmp or whatever.
  enjoy,

 Another option is fencing via a power device, e.g. fence_apc or fence_apc_snmp,
 but you would need to buy APC hardware. Fencing via fence_ilo,
 fence_rsa or fence_ipmilan is an option if you have IBM, Dell or HP
 servers. You could also try fence_scsi without any extra cost, but it doesn't
 work if you have a multipath configuration.

I second that: fence_scsi should work pretty well if your target supports
SCSI-3 persistent reservations. It does not make much sense to use multipath
I/O  for iSCSI since channel bonding provides the same functionality
nowadays.
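
If you are unsure whether the target honors SCSI-3 persistent reservations,
sg3_utils gives a quick check (the device path below is just a placeholder):

# Query registered keys and the current reservation on the shared LUN.
sg_persist --in --read-keys /dev/disk/by-id/scsi-EXAMPLE
sg_persist --in --read-reservation /dev/disk/by-id/scsi-EXAMPLE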

Also, if you have a 2-node cluster then you can configure a quorum disk on an
iSCSI volume as a tiebreaker.

-Alex
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster

Re: [Linux-cluster] SCSI Reservations Red Hat Cluster Suite

2008-03-28 Thread Alex Kompel
On Fri, Mar 28, 2008 at 8:15 AM, Ryan O'Hara [EMAIL PROTECTED] wrote:

 The reason for the cluster LVM2 requirement is device discovery. The
 scripts use LVM commands to find cluster volumes and then get a list of
 devices that make up those volumes. Consider the alternative -- users
 would have to manually define a list of devices that need
 registrations/reservations. This would have to be defined on each node.
 What makes this even more problematic is that each node may have
 different device names for shared storage devices (i.e. what may be
 /dev/sdb on one node may be /dev/sdc on another). Furthermore, those
 device names could change between reboots. The general solution is to
 query clvmd for a list of cluster volumes and get a list of devices for
 those volumes.

You can also use the symbolic links under /dev/disk/by-id/, which are
persistent across nodes and reboots.
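
For example (the device name is a placeholder):

# by-id names are stable across reboots and identical on every node that
# sees the same LUN, so they are safer to reference than /dev/sdX.
ls -l /dev/disk/by-id/
readlink -f /dev/disk/by-id/scsi-EXAMPLE   # shows which /dev/sdX it maps to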

-Alex

--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster


Re: [Linux-cluster] Two node NFS cluster serving multiple networks

2008-03-13 Thread Alex Kompel
Google linux policy based routing.

In your example you just need to set up different gateways for both
interfaces. For example:
ip route add default via 69.2.237.57 dev eth0 tab 1
ip route add default via 192.168.1.1 dev eth1 tab 2


On Thu, Mar 13, 2008 at 9:23 AM, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:
 Is there a good document somewhere which explains in not too great technical
 terms how to use multiple nics on a system. I've been running bonded nics for
 many years but getting a machine to use two (or more networks) is still a
 mystery to me.

 For example, I have a VoIP machine which has two nics which I have problems
 with because I don't understand the above yet.

 This machine has a nic that allows incoming VoIP/SIP connections to its public IP
 address on a T1. The router blocks everything but that traffic.

 Then it has a second nic which has a private IP on it to allow for management
 of the machine. Yet recently, it lost its DNS; it can't seem to get access to
 DNS on its own. I can force it to use DNS by typing ping commands a couple of
 times but it cannot do it on its own to get its updates, for example.

 Basically, I need the machine to see its public gateway at xx.x.237.59 to
 route its VoIP/SIP traffic but I also need it to see its private gateway on
 192.168.1.0 so that it can use DNS and other internal services properly.

 route -n
 Kernel IP routing table
 Destination     Gateway         Genmask          Flags Metric Ref  Use Iface
 xx.x.237.56     0.0.0.0         255.255.255.248  U     0      0    0   eth0
 192.168.1.0     0.0.0.0         255.255.255.0    U     0      0    0   eth1
 169.254.0.0     0.0.0.0         255.255.0.0      U     0      0    0   eth1
 0.0.0.0         69.2.237.57     0.0.0.0          UG    0      0    0   eth0

 ifconfig
 eth0  Link encap:Ethernet  HWaddr 00:90:27:DC:4B:E6
  inet addr:xx.x.237.59  Bcast:69.2.237.63  Mask:255.255.255.248
  inet6 addr: fe80::290:27ff:fedc:4be6/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:33910280 errors:16 dropped:0 overruns:0 frame:16
  TX packets:45988648 errors:0 dropped:0 overruns:0 carrier:0
  collisions:24746 txqueuelen:1000
  RX bytes:681966199 (650.3 MiB)  TX bytes:1657358619 (1.5 GiB)

 eth1  Link encap:Ethernet  HWaddr 00:13:20:55:D7:CE
  inet addr:192.168.1.102  Bcast:192.168.1.255  Mask:255.255.255.0
  inet6 addr: fe80::213:20ff:fe55:d7ce/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:87417784 errors:0 dropped:0 overruns:0 frame:0
  TX packets:70881957 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:4171601084 (3.8 GiB)  TX bytes:1547562481 (1.4 GiB)

 loLink encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:6501004 errors:0 dropped:0 overruns:0 frame:0
  TX packets:6501004 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:897257336 (855.6 MiB)  TX bytes:897257336 (855.6 MiB)


 Mike




 On Wed, 12 Mar 2008 10:39:50 -0700, Alex Kompel wrote:
  You will still need some way to tell the system through which
 
  interface you want to route outgoing packets for each target.
  You can achieve the same with greater ease by splitting the network in
  2 subnets and assigning each to a single interface.
  It all depends on the problem you are trying to solve. If you want
  redundancy - use active-passive bonding, you want throughput - use
  active-active bonding (if your switch supports link aggregation), if
  you want security and isolation - use separate subnets.
 
  -Alex
 
  2008/3/12 Brian Kroth [EMAIL PROTECTED]:
  This is a hypothetical, but what if you have two interfaces on the same
  network and want to force one service IP to one interface and the other
  to a different interface?  I think what everyone is wondering is how
  much control one has over the service IP placement.
 
  Thanks,
  Brian
 
  Finnur Örn Guðmundsson - TM Software [EMAIL PROTECTED] 2008-03-12 14:36:
 
 
  Hi,
 
  I see no reason why you could not have 3 different interfaces, each
  connected to the networks you are trying to serve the NFS requests
  to/from. RG Manager will add the floating IPs to the correct
  interface, that is, if your floating IP is 1.2.3.4 and you have an
  interface with the IP address 1.2.3.3 it will add the IP to that
  interface.
 
 
  Bgrds,
  Finnur
 
  -Original Message-
  From: [EMAIL PROTECTED] [mailto:linux-cluster-
  [EMAIL PROTECTED] On Behalf Of [EMAIL PROTECTED]
  Sent: 12. mars 2008 14:10
  To: linux clustering
  Subject: Re: [Linux-cluster] Two node NFS cluster serving multiple
  networks
 
  Sounds very similar to what I'm trying to achieve (see the other thread
  about binding

Re: [Linux-cluster] Two node NFS cluster serving multiple networks

2008-03-13 Thread Alex Kompel
Actually, I take it back: in your example I guess you can add a static
route to the network where the DNS servers are, and that should do it.

PS: You can have multiple routing tables which are selected based on
rules (which I forgot to mention):
http://lartc.org/howto/lartc.rpdb.html
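
To tie it together with the earlier example ("tab" is just short for "table";
the addresses are taken from the quoted output and used here only as
placeholders):

# Per-interface default routes in separate tables, selected by source address.
ip route add default via 69.2.237.57 dev eth0 table 1
ip route add default via 192.168.1.1 dev eth1 table 2
ip rule add from 69.2.237.59/32 table 1
ip rule add from 192.168.1.102/32 table 2
ip route flush cache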

On Thu, Mar 13, 2008 at 1:57 PM, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:
 Guess I forgot to edit those IP's :).

 I thought you could only have one default gateway on a machine.
 I've never needed to deal with multiple nics other than bonded.

 PS: What does tab 1/2 mean?

 Mike



 On Thu, 13 Mar 2008 13:39:25 -0700, Alex Kompel wrote:
  Google linux policy based routing.
 
  In your example you just need to setup different gateways for both
  interfaces. For example:
  ip route add default via 69.2.237.57 dev eth0 tab 1
  ip route add default via 192.168.1.1 dev eth1 tab 2
 
 
  On Thu, Mar 13, 2008 at 9:23 AM, [EMAIL PROTECTED]
  [EMAIL PROTECTED] wrote:
  Is there a good document somewhere which explains in not too great
  technical
  terms how to use multiple nics on a system. I've been running bonded nics
  for
  many years but getting a machine to use two (or more networks) is still a
  mystery to me.
 
  For example, I have a VoIP machine which has two nics which I have
  problems
  with because I don't understand the above yet.
 
  This machine has a nic allows incoming VoIP/ZIP connections to it's
  public IP
  address on a T1. The router blocks everything but that traffic.
 
  Then it has a second nic which has a private IP on it to allow for
  management
  of the machine. Yet recently, it lost it's DNS, it can't seem to get
  access to
  DNS on it's own. I can force it to use DNS by typing ping commands a
  couple of
  times but it cannot do it on it's own to get it's updates for example.
 
  Basically, I need the machine to see it's public gateway at xx.x.237.59 to
  route it's VoIP/SIP traffic but I also need it to see it's private
  gateway at
  192.168.1.0 so that it can use DNS and other internal services properly.
 
  route -n
  Kernel IP routing table
  Destination   Gateway  GenmaskFlags Metric RefUse
  Iface
  xx.x.237.56   0.0.0.0255.255.255.248 U 0  00 eth0
  192.168.1.0  0.0.0.0255.255.255.0U 0  00 eth1
  169.254.0.0  0.0.0.0255.255.0.0U 0  00
  eth1
  0.0.0.0 69.2.237.57   0.0.0.0 UG0  00
  eth0
 
  ifconfig
  eth0  Link encap:Ethernet  HWaddr 00:90:27:DC:4B:E6
  inet addr:xx.x.237.59  Bcast:69.2.237.63  Mask:255.255.255.248
  inet6 addr: fe80::290:27ff:fedc:4be6/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:33910280 errors:16 dropped:0 overruns:0 frame:16
  TX packets:45988648 errors:0 dropped:0 overruns:0 carrier:0
  collisions:24746 txqueuelen:1000
  RX bytes:681966199 (650.3 MiB)  TX bytes:1657358619 (1.5 GiB)
 
  eth1  Link encap:Ethernet  HWaddr 00:13:20:55:D7:CE
  inet addr:192.168.1.102  Bcast:192.168.1.255  Mask:255.255.255.0
  inet6 addr: fe80::213:20ff:fe55:d7ce/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:87417784 errors:0 dropped:0 overruns:0 frame:0
  TX packets:70881957 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:4171601084 (3.8 GiB)  TX bytes:1547562481 (1.4 GiB)
 
  loLink encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:6501004 errors:0 dropped:0 overruns:0 frame:0
  TX packets:6501004 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:897257336 (855.6 MiB)  TX bytes:897257336 (855.6 MiB)
 
 
  Mike
 
 
  On Wed, 12 Mar 2008 10:39:50 -0700, Alex Kompel wrote:
  You will still need some way to tell the system through which
 
  interface you want to route outgoing packets for each target.
  You can achieve the same with greater ease by splitting the network in
  2 subnets and assigning each to a single interface.
  It all depends on the problem you are trying to solve. If you want
  redundancy - use active-passive bonding, you want throughput - use
  active-active bonding (if your switch supports link aggregation), if
  you want security and isolation - use separate subnets.
 
  -Alex
 
  2008/3/12 Brian Kroth [EMAIL PROTECTED]:
  This is a hypothetical, but what if you have two interfaces on the
  same
  network and want to force one service IP to one interface and the
  other
  to a different interface?  I think what everyone is wondering is how
  much control one has over the service IP placement.
 
  Thanks,
  Brian
 
  Finnur Örn Guðmundsson - TM Software [EMAIL PROTECTED] 2008-03-12 
  14:36:
 
 
  Hi,
 
  I see no reason why you could not have 3 diffrent interfaces, each
  connected to the networks you are trying to serve the NFS requests
  to/from. RG Manager will add the floating interfaces

Re: [Linux-cluster] Two node NFS cluster serving multiple networks

2008-03-13 Thread Alex Kompel
Well, that depends on where his DNS servers are. If they are on, for
example, the 192.168.2 network, then DNS traffic is routed through the public
interface.

2008/3/13 Bennie Thomas [EMAIL PROTECTED]:
 I never use multiple routes; they can cause you some grief. Make sure your
 /etc/hosts, /etc/resolv.conf and /etc/nsswitch.conf files are set up correctly.
 I use multiple networks currently and have no problems with the traffic
 going out the correct paths.

 B



 [EMAIL PROTECTED] wrote:
 Guess I forgot to edit those IP's :).

 I thought you could only have one
 default gateway on a machine.
 I've never needed to deal with multiple nics
 other than bonded.

 PS: What does tab 1/2 mean?

 Mike


 On Thu, 13 Mar 2008
 13:39:25 -0700, Alex Kompel wrote:

 Google linux policy based routing.

 In your example you just need to setup
 different gateways for both
 interfaces. For example:
 ip route add default
 via 69.2.237.57 dev eth0 tab 1
 ip route add default via 192.168.1.1 dev eth1
 tab 2


 On Thu, Mar 13, 2008 at 9:23 AM,
 [EMAIL PROTECTED]
 [EMAIL PROTECTED] wrote:

 Is there a good document somewhere which explains in not too
 great
 technical
 terms how to use multiple nics on a system. I've been
 running bonded nics
 for
 many years but getting a machine to use two (or more
 networks) is still a
 mystery to me.

 For example, I have a VoIP machine
 which has two nics which I have
 problems
 with because I don't understand the
 above yet.

 This machine has a nic allows incoming VoIP/ZIP connections to
 it's
 public IP
 address on a T1. The router blocks everything but that
 traffic.

 Then it has a second nic which has a private IP on it to allow
 for
 management
 of the machine. Yet recently, it lost it's DNS, it can't seem
 to get
 access to
 DNS on it's own. I can force it to use DNS by typing ping
 commands a
 couple of
 times but it cannot do it on it's own to get it's
 updates for example.

 Basically, I need the machine to see it's public
 gateway at xx.x.237.59 to
 route it's VoIP/SIP traffic but I also need it to
 see it's private
 gateway at
 192.168.1.0 so that it can use DNS and other
 internal services properly.

 route -n
 Kernel IP routing table
 Destination
 Gateway Genmask Flags Metric Ref Use
 Iface
 xx.x.237.56 0.0.0.0
 255.255.255.248 U 0 0 0 eth0
 192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0
 eth1
 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0
 eth1
 0.0.0.0 69.2.237.57
 0.0.0.0 UG 0 0 0
 eth0

 ifconfig
 eth0 Link encap:Ethernet HWaddr
 00:90:27:DC:4B:E6
 inet addr:xx.x.237.59 Bcast:69.2.237.63
 Mask:255.255.255.248
 inet6 addr: fe80::290:27ff:fedc:4be6/64 Scope:Link
 UP
 BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
 RX packets:33910280 errors:16
 dropped:0 overruns:0 frame:16
 TX packets:45988648 errors:0 dropped:0
 overruns:0 carrier:0
 collisions:24746 txqueuelen:1000
 RX bytes:681966199
 (650.3 MiB) TX bytes:1657358619 (1.5 GiB)

 eth1 Link encap:Ethernet HWaddr
 00:13:20:55:D7:CE
 inet addr:192.168.1.102 Bcast:192.168.1.255
 Mask:255.255.255.0
 inet6 addr: fe80::213:20ff:fe55:d7ce/64 Scope:Link
 UP
 BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
 RX packets:87417784 errors:0
 dropped:0 overruns:0 frame:0
 TX packets:70881957 errors:0 dropped:0
 overruns:0 carrier:0
 collisions:0 txqueuelen:1000
 RX bytes:4171601084 (3.8
 GiB) TX bytes:1547562481 (1.4 GiB)

 lo Link encap:Local Loopback
 inet
 addr:127.0.0.1 Mask:255.0.0.0
 inet6 addr: ::1/128 Scope:Host
 UP LOOPBACK
 RUNNING MTU:16436 Metric:1
 RX packets:6501004 errors:0 dropped:0 overruns:0
 frame:0
 TX packets:6501004 errors:0 dropped:0 overruns:0
 carrier:0
 collisions:0 txqueuelen:0
 RX bytes:897257336 (855.6 MiB) TX
 bytes:897257336 (855.6 MiB)


 Mike


 On Wed, 12 Mar 2008 10:39:50 -0700,
 Alex Kompel wrote:

 You will still need some way to tell the system through which

 interface you
 want to route outgoing packets for each target.
 You can achieve the same
 with greater ease by splitting the network in
 2 subnets and assigning each
 to a single interface.
 It all depends on the problem you are trying to
 solve. If you want
 redundancy - use active-passive bonding, you want
 throughput - use
 active-active bonding (if your switch supports link
 aggregation), if
 you want security and isolation - use separate
 subnets.

 -Alex

 2008/3/12 Brian Kroth [EMAIL PROTECTED]:

 This is a hypothetical, but what if you have two interfaces on
 the
 same
 network and want to force one service IP to one interface and
 the
 other
 to a different interface? I think what everyone is wondering is
 how
 much control one has over the service IP
 placement.

 Thanks,
 Brian

 Finnur Örn Guðmundsson - TM Software [EMAIL PROTECTED]
 2008-03-12 14:36:



 Hi,

 I see no reason why you could not have 3 diffrent interfaces,
 each
 connected to the networks you are trying to serve the NFS
 requests
 to/from. RG Manager will add the floating interfaces to
 the
 correct
 interface, that is, if your floating ip is 1.2.3.4 and you
 have a
 interface with the IP address 1.2.3.3 he will add the IP

Re: [Linux-cluster] Two node NFS cluster serving multiple networks

2008-03-12 Thread Alex Kompel
You will still need some way to tell the system through which
interface you want to route outgoing packets for each target.
You can achieve the same with greater ease by splitting the network into
2 subnets and assigning each to a single interface.
It all depends on the problem you are trying to solve. If you want
redundancy, use active-passive bonding; if you want throughput, use
active-active bonding (if your switch supports link aggregation); if
you want security and isolation, use separate subnets.
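
Expressed as bonding driver options, the first two choices look roughly like
this (illustrative only; one or the other, not both):

# Redundancy: active-passive
modprobe bonding mode=active-backup miimon=100
# Throughput: active-active, needs 802.3ad/LACP support on the switch
# modprobe bonding mode=802.3ad miimon=100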

-Alex

2008/3/12 Brian Kroth [EMAIL PROTECTED]:
 This is a hypothetical, but what if you have two interfaces on the same
  network and want to force one service IP to one interface and the other
  to a different interface?  I think what everyone is wondering is how
  much control one has over the service IP placement.

  Thanks,
  Brian

  Finnur Örn Guðmundsson - TM Software [EMAIL PROTECTED] 2008-03-12 14:36:


  Hi,
  
   I see no reason why you could not have 3 different interfaces, each 
  connected to the networks you are trying to serve the NFS requests to/from. 
  RG Manager will add the floating IPs to the correct interface, that 
  is, if your floating IP is 1.2.3.4 and you have an interface with the IP 
  address 1.2.3.3 it will add the IP to that interface.
  
  
   Bgrds,
   Finnur
  
   -Original Message-
   From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of [EMAIL 
 PROTECTED]
   Sent: 12. mars 2008 14:10
   To: linux clustering
   Subject: Re: [Linux-cluster] Two node NFS cluster serving multiple networks
  
   Sounds very similar to what I'm trying to achieve (see the other thread
   about binding failover resources to interfaces). I've not seen a response
   yet, so I'm most curious to see if you'll get any.
  
   Gordan
  
   On Wed, 12 Mar 2008, Randy Brown wrote:
  
I am using a two node cluster with Centos 5 with up to date patches.  We 
 have
three different networks to which I would like to serve nfs mounts from 
 this
cluster.  Can this even be done?  I have interfaces available for each
network in each node?
  
   --
   Linux-cluster mailing list
   Linux-cluster@redhat.com
   https://www.redhat.com/mailman/listinfo/linux-cluster
  
   --
   Linux-cluster mailing list
   Linux-cluster@redhat.com
   https://www.redhat.com/mailman/listinfo/linux-cluster

 --
  Linux-cluster mailing list
  Linux-cluster@redhat.com
  https://www.redhat.com/mailman/listinfo/linux-cluster


--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster


Re: [Linux-cluster] Clustering Oracle 10g

2008-02-27 Thread Alex Kompel
On Wed, Feb 27, 2008 at 6:45 AM, Stephen Nelson-Smith
[EMAIL PROTECTED] wrote:
 Hi,

 I've got a client with a single instance of 10g running on Windows Server 
 2003.

 They've approached me with a view to migrating to Linux, and
 increasing its availability.

 What's the current ruling from Oracle about multiple licences?  If I
 just had an active-passive system, do I still have to fork out five
 figure numbers for the second, idle Oracle instance?




http://www.oracle.com/corporate/pricing/sig.pdf


Failover – In this type of recovery, nodes are configured in
cluster; the first
installed node acts as a primary node. If the primary node fails, one of the
nodes in the cluster acts as the primary node. In this type of environment,
Oracle permits licensed Oracle Database customers to run the Database on
an unlicensed spare computer for up to a total of ten separate days in any
given calendar year. Any other use requires the environment to be fully
licensed. Same rule applies for Internet Application Server. Additionally, the
same metric must be used when licensing the databases in a failover
environment. See illustration #4.


--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster


Re: [Linux-cluster] Any HA Cluster Success with iSCSI storage?

2008-01-25 Thread Alex Kompel
On Jan 25, 2008 6:51 AM, Ben Russo [EMAIL PROTECTED] wrote:

 I currently have a RHEL-3 HA Cluster in a different City using fiber
 channel SCSI storage.  It has worked fine.

 I want to setup another cluster, this time with RHEL-4.

 I already have a NetApp FAS270c (for NFS and CIFS NAS).
 It supports iSCSI.


 ***  Can I setup my two node HA cluster with iSCSI quorum drives and
 cluster service storage volumes? (anyone do this before?)


I am going to play devil's advocate here: is there a specific reason you want
to use GFS in this setup? NetApp has excellent NFS and CIFS support, and it
looks like you already paid for both and for the HA option for NetApp (the 270c
is a clustered filer).



 ***  I was thinking about getting 10Gbit/sec uplinks for the NetApp and
 two ethernet switches that have 10Gbit uplink ports and that support
 802.3ad.  The two cluster nodes would use 802.3ad NIC channel bonding
 for the storage access bandwidth.  (anyone do this before?)

Do not get 10GbE. First, I don't think you can get 10GbE interfaces in the
FAS270.
Second, the FAS270 won't be able to saturate even a 1Gb link. The bottleneck is
usually the filer CPU.
It does support link aggregation, but you won't see much of a difference vs
active/passive bonding.

-Alex
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster

Re: [Linux-cluster] nanny segfault problem

2007-11-13 Thread Alex Kompel
On 11/13/07, Christopher Barry [EMAIL PROTECTED] wrote:

  Greetings All,

 running RHEL4U5

 I have a bunch of services on my cluster w/ access via redundant
 directors.

 I've created a generic service checking script, which I'm specifying in
 lvs.cf's 'send_program' config parameter.

 script is attached to this post. see that for how it works with the
 symlinks described below.

 I create symlinks to the script for every service I want to check, with
 their name containing the port to hit, as in:
 /sbin/lvs-<port>.sh

 so the symlink name to check ssh availability, for instance, is:
 /sbin/lvs-22.sh

 The script works fine, and returns the first contiguous block of
 [[:alnum:]] text data from the connection attempt for use with the
 expect line of lvs.cf.


 The problem is, when nanny is spawned by pulse, all of the nanny
 processes segfault.

  Nov 13 14:40:44 kop-sds-dir-01 lvs[17740]: create_monitor for
 ssh_access/kop-sds-01 running as pid 17749
  Nov 13 14:40:44 kop-sds-dir-01 nanny[17749]: making 10.32.12.11:22 available
  Nov 13 14:40:44 kop-sds-dir-01 kernel: nanny[17749]: segfault at
 006c rip 00335e570810 rsp 007fbfffe978 error 4

 this occurs almost instantly for every nanny process.

 Can anyone venture a guess as to what is happening?


Try running nanny manually in the foreground - see if you get any error
messages. The RHEL5 nanny (0.8.4) has a bug where it segfaults when printing
syslog messages longer than 80 characters. Could be that. The patch is
below.

*** util.c    2002-04-25 21:19:57.0 -0700
--- util.new  2007-10-10 13:27:43.0 -0700
***************
*** 49,55 ****

  while (1)
    {
!   ret = vsnprintf (buf, bufLen, format, args);
    if ((ret > -1) && (ret < bufLen))
      {
        break;
--- 49,58 ----

  while (1)
    {
!   va_list try_args;
!   va_copy(try_args, args);
!   ret = vsnprintf (buf, bufLen, format, try_args);
!   va_end(try_args);
    if ((ret > -1) && (ret < bufLen))
      {
        break;
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster