Re: [Linux-cluster] About GFS1 and I/O barriers.

2008-03-31 Thread Steven Whitehouse
Hi,

Both GFS1 and GFS2 are safe from this problem since neither of them uses
barriers. Instead, we do a flush at the critical points to ensure that
all data is on disk before proceeding with the next stage.

Using barriers can improve performance in certain cases, but we've not
yet implemented them in GFS2.

Steve.
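
For comparison, here is how barriers are controlled on the two file
systems Mathieu mentions below; a minimal sketch, with devices and
mount points as placeholders:

  # ext3: write barriers are off by default and enabled per mount
  mount -t ext3 -o barrier=1 /dev/sda1 /mnt/data
  # XFS: barriers are on by default; nobarrier disables them
  mount -t xfs -o nobarrier /dev/sdb1 /mnt/scratch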

On Mon, 2008-03-31 at 12:46 +0200, Mathieu Avila wrote:
 Hello all again,
 
 More information on this topic:
 http://lkml.org/lkml/2007/5/25/71
 
 I guess the problem also applies to GFS2.
 
 --
 Mathieu
 
 On Fri, 28 Mar 2008 15:34:58 +0100,
 Mathieu Avila [EMAIL PROTECTED] wrote:
 
  Hello GFS team,
  
  Some recent kernel developments have brought I/O barriers into the
  kernel to prevent corruption that can happen when blocks are reordered
  before being written - by the kernel or by the block device itself -
  just before an electrical power failure. (On high-end block devices
  with a UPS or NVRAM, those problems cannot happen.)
  Some file systems implement them, notably ext3 and XFS. It seems to me
  that GFS1 has no such thing.
  
  Do you plan to implement it? If so, could the attached patch do the
  work? It's incomplete: it would need a global tunable like
  fast_statfs, and a mount option like the one ext3 has. The code is
  mainly a copy-paste from JBD, and issues a barrier only for journal
  metadata. (Should I do it for other metadata?)
  
  Thanks,
  
  --
  Mathieu
  
 
 --
 Linux-cluster mailing list
 Linux-cluster@redhat.com
 https://www.redhat.com/mailman/listinfo/linux-cluster

--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster


Re: [Linux-cluster] SCSI Reservations Red Hat Cluster Suite

2008-03-31 Thread Maciej Bogucki
Tomasz Sucharzewski wrote:
 Hello,
 
 Ryan O'Hara wrote:
 4  - Limitations
 ...
 - Multipath devices are not currently supported.
 
 What is the reason? It is practically mandatory to use at least two
 HBAs in a SAN, which is useless when using SCSI reservations.
 
Hello,

Multipath is needed in production environments, and this is the main
drawback of SCSI reservations with Red Hat Cluster Suite.

You can read more about the problems with SCSI reservations here [1].

[1] http://www.mail-archive.com/linux-cluster@redhat.com/msg00524.html


Best Regards
Maciej Bogucki

--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster


Re: [Linux-cluster] SCSI Reservations Red Hat Cluster Suite

2008-03-31 Thread Maciej Bogucki
 I believe that this is only a limitation for RHEL4. RHEL5 should have a
 fix that allows dm-multipath to properly pass ioctls to all devices.

Hello,

One problem is registration, but another is un-registration, e.g.
when there is a failover from one HBA to another and then a failback. A
third problem is active-standby load balancing across two or more HBAs,
and how to handle it.

Best Regards
Maciej Bogucki
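
As an aside, the registrations themselves can be inspected and driven
by hand with sg_persist from the sg3_utils package, which makes the
per-path problem visible; a sketch, with device and key values as
placeholders:

  # register a key for this initiator/path with the device
  sg_persist --out --register --param-sark=0x1 /dev/sdb
  # list the keys currently registered (one per registered path)
  sg_persist --in --read-keys /dev/sdb
  # unregister: re-register the existing key with a new key of zero
  sg_persist --out --register --param-rk=0x1 --param-sark=0 /dev/sdb

With two HBAs, each path is a separate I_T nexus and needs its own
registration, which is exactly what makes failover/failback awkward.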

--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster


Re: [Linux-cluster] SCSI reservation conflicts after update

2008-03-31 Thread Maciej Bogucki
 From my understanding persistent SCSI reservations are only needed if I
 am using the fence_scsi module.

Yes.

Best Regards
Maciej Bogucki
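
To see what fence_scsi has actually placed on a LUN, the reservation
state can be read back directly; a sketch, with the device as a
placeholder:

  # show the current reservation holder and type, if any
  sg_persist --in --read-reservation /dev/sdb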

--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster


[Linux-cluster] (newbie) mirrored data / cluster ?

2008-03-31 Thread Daniel Maher
Hello all,

I have spent the day reading through the mailing list archives, Red Hat
documentation, and CentOS forums, and - to be frank - my head is now
swimming with information.

My scenario seems reasonably straightforward: I would like to have two
file servers which mirror each other's data, then I'd like those two
servers to act as a cluster, whereby they serve said data as if they
were one machine.  If one of the servers suffers a critical failure,
the other will stay up, and the data will continue to be accessible to
the rest of the network.

I note with some trepidation that this might not be possible, as per
this document:
http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/en-US/RHEL510/Cluster_Logical_Volume_Manager/mirrored_volumes.html

However, I don't know if that document relates to the same scenario
I've described above.  I would very much appreciate any and all
feedback, links to further documentation, and any other information
that you might like to share.

Thank you !


-- 
Daniel Maher dma AT witbe.net


--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster

Re: [Linux-cluster] Unformatting a GFS cluster disk

2008-03-31 Thread DRand
Hi, 

We're making some progress on recovering our GFS disk.

We've created a small C app which marks all bytes as 0xff at:

grp hdr: offset 80
grp blk: offset 120

But fsck takes ages to run (it looks like weeks on our full disk). If we run 
it on a smaller disk the process completes faster, but after that the device 
is not mountable.

So, we're thinking about an alternate strategy. What if we try to 
reconstruct the GFS headers to tell it where the old file group structures 
are located? I.e., I can identify all (old and new) file groups on the disk. 
I then change the header structures so they point to the old file groups (at 
least the ones that were not overwritten by the previous mkfs) rather than 
the new file groups.

Does this approach make sense? Where do I update the GFS headers to tell 
the system where the old file group headers are located?

Damon.
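
Incidentally, the byte-marking step does not strictly need a custom C
app; a hypothetical sketch with dd, assuming a 4096-byte block size and
a resource-group block number RG_BLOCK found by scanning the disk (the
byte count, offsets, and device are placeholders mirroring the ones
above):

  RG_BLOCK=17
  OFFSET=$((RG_BLOCK * 4096 + 80))
  # write four 0xff bytes in place; conv=notrunc preserves the rest
  printf '\377\377\377\377' | dd of=/dev/sdb bs=1 seek=$OFFSET conv=notrunc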
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster

RE: [Linux-cluster] (newbie) mirrored data / cluster ?

2008-03-31 Thread MARTI, ROBERT JESSE
You don't have to have a mirrored LVM to do what you're trying to do.
You just need a common mountable share - typically a SAN or NAS.  It
shouldn't be too hard to configure (and I've already done it).  You
don't even *have* to have Cluster Suite - if you have a load balancer.
My brain isn't fast enough today to figure out how to share a load
without a load-balanced VIP or a DNS round robin (which should be easy
to do as well).

Rob Marti
Systems Analyst II
Sam Houston State University

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Daniel Maher
Sent: Monday, March 31, 2008 12:40 PM
To: linux-cluster@redhat.com
Subject: [Linux-cluster] (newbie) mirrored data / cluster ?

[quoted message trimmed]

--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster


Re: [Linux-cluster] (newbie) mirrored data / cluster ?

2008-03-31 Thread Chris Harms
The non-SAN option would be to use DRBD (http://www.drbd.org) and put 
NFS, Samba, etc. on top of the DRBD partition.


Chris
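
For reference, a minimal DRBD resource definition for such a two-node
mirror; a sketch of /etc/drbd.conf, with hostnames, disks, and IPs as
placeholders (protocol C is synchronous replication):

  resource r0 {
    protocol C;
    on filer1 {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      address   192.168.0.1:7788;
      meta-disk internal;
    }
    on filer2 {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      address   192.168.0.2:7788;
      meta-disk internal;
    }
  }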

MARTI, ROBERT JESSE wrote:

[quoted message trimmed]


--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster


[Linux-cluster] drbd and gfs

2008-03-31 Thread Tiago Cruz
Hi,

I'm trying to do the following setup:

http://photos1.blogger.com/blogger/2723/1994/1600/plan.jpg

Extracted from this link:
http://linuxsutra.blogspot.com/2006/11/howto-gfs-gnbd.html


But my GNBD server is a dom0 and my nodes are two domUs, all
running RHEL 5.1 x86_64.

10.25.2.1   gnbdserv.mycluster.com gnbdserv
10.25.0.251 node1.mycluster.com
10.25.0.252 node2.mycluster.com

My cluster.conf is this:
http://photos1.blogger.com/blogger/2723/1994/1600/clusterconf.jpg

But I'm getting a lot of logs like this:

Mar 31 15:37:24 teste-spo-la-v1 fenced[1530]: fencing node 
gnbdserv.mycluster.com
Mar 31 15:37:24 teste-spo-la-v1 fenced[1530]: agent fence_gnbd reports: 
warning: 'ipaddr' key is depricated, please see man page failed: missing server 
list 
Mar 31 15:37:24 teste-spo-la-v1 fenced[1530]: fence gnbdserv.mycluster.com 
failed

Question 1: I can already see the device exported from the server on my
nodes, but do I still need the dom0 to be part of the cluster?

[EMAIL PROTECTED] ~]# gnbd_import -n -l
Device name : cluster
--
Minor # : 0
 sysfs name : /block/gnbd0
 Server : gnbdserv
   Port : 14567
  State : Close Connected Clear
   Readonly : No
Sectors : 20971520

[EMAIL PROTECTED] ~]# cman_tool nodes
Node  Sts   Inc   Joined   Name
   1   X  0gnbdserv.mycluster.com
   2   M 24   2008-03-31 12:20:27  node1.mycluster.com
   3   M 16   2008-03-31 12:20:27  node2.mycluster.com



Question 2: I've formatted it this way:

# mkfs.gfs2 -t mycluster:root -p lock_dlm -j 2 /dev/Vol_LVM/mycluster

But I still can't mount the device:

[EMAIL PROTECTED] ~]# mount -v /dev/gnbd/cluster /mnt/
mount: you didn't specify a filesystem type for /dev/gnbd/cluster
   I will try type gfs2
/sbin/mount.gfs2: mount /dev/gnbd/cluster /mnt
/sbin/mount.gfs2: parse_opts: opts = rw
/sbin/mount.gfs2:   clear flag 1 for rw, flags = 0
/sbin/mount.gfs2: parse_opts: flags = 0
/sbin/mount.gfs2: parse_opts: extra = 
/sbin/mount.gfs2: parse_opts: hostdata = 
/sbin/mount.gfs2: parse_opts: lockproto = 
/sbin/mount.gfs2: parse_opts: locktable = 
/sbin/mount.gfs2: message to gfs_controld: asking to join mountgroup:
/sbin/mount.gfs2: write join /mnt gfs2 lock_dlm mycluster:root rw 
/dev/gnbd/cluster
/sbin/mount.gfs2: node not a member of the default fence domain
/sbin/mount.gfs2: error mounting lockproto lock_dlm


What can I do to fix this?

Thanks!!

-- 
Tiago Cruz
http://everlinux.com
Linux User #282636
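
The fence_gnbd warning above is the clue: the agent now wants a server
list instead of the deprecated ipaddr key. A hypothetical sketch of the
fix (the attribute name is inferred from the warning; check the
fence_gnbd man page before relying on it):

  # in /etc/cluster/cluster.conf, replace the deprecated ipaddr= key, e.g.:
  #   <fencedevice agent="fence_gnbd" name="gnbd" servers="gnbdserv.mycluster.com"/>
  # then bump config_version, propagate it, and join the fence domain:
  ccs_tool update /etc/cluster/cluster.conf
  fence_tool join

The "node not a member of the default fence domain" mount error also
points at fenced: the node must join the fence domain before
gfs_controld will let it mount.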


--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster


[Linux-cluster] GNBD and GFS (was: drbd and gfs)

2008-03-31 Thread Tiago Cruz
Hello all,

Sorry... wrong subject! I had DRBD on my mind, but I don't like
it because it only supports two nodes. The correct subject is
GNBD and GFS!

I hope that someone can help me!
Regards

On Mon, 2008-03-31 at 15:52 -0300, Tiago Cruz wrote:
 [quoted message trimmed]
 
-- 
Tiago Cruz
http://everlinux.com
Linux User #282636


--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster


[Linux-cluster] Using GFS and DLM without RHCS

2008-03-31 Thread Danny Wall
I was wondering if it is possible to run GFS on several machines with a
shared GFS LUN, but not use full clustering like RHCS. From the FAQs:


Can I set up GFS on a single node and then add additional nodes later? 

    Yes you can. For the initial single node setup, simply set up GFS
    using the nolock locking method. Make sure you create the file
    system with enough journals to support the number of nodes you
    wish to add later. (If you do not add enough, you can add
    journals later, but you must add additional space to the volume
    GFS is on to do so.)

 Once you want to add more nodes, you need to set up the cluster
 infrastructure just as you would in an initial multi-node
 configuration. You also need to modify the gfs superblock with
 gfs_tool to switch it to a multi-node locking setup. Use the
 values you would have given to gfs_mkfs: instead of the '-p <proto>'
 flag to mkfs, use 'gfs_tool sb <device> proto <proto>', and instead of
 the '-t <table>' flag to mkfs, use 'gfs_tool sb <device> table <table>'.

Once these changes and additions are made, fire up the cluster
infrastructure and mount GFS. 
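
Concretely, the superblock conversion described above looks like this;
the device path and the cluster:fsname pair are placeholders, and the
file system must be unmounted on all nodes first:

  gfs_tool sb /dev/vg0/lv0 proto lock_dlm
  gfs_tool sb /dev/vg0/lv0 table mycluster:gfs01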


I would assume the answer is no, but since this page was published in
2004, I was hoping it is now possible. I would prefer to have a Cisco
CSS front the servers and send clients to the preferred available server
for Samba shares, as long as the service is available on that server. If
not, it could redirect to a different server that is available. This
would simplify the servers by not requiring clustering; they would
only require GFS and DLM for locking. Ideally, when Samba 4 is released
with the ability to load balance the workload, I could allow the Cisco
CSS to do full load balancing. Until then, it would simply act like a
DNS change by talking to one server or the other.

I have had a few problems with RHCS, and while it has done its job most
of the time, if I can simplify the setup by simply moving an IP, it
would be easier to manage and potentially more reliable. Fencing could
be available, but if only one server is used at a time, would it be
needed? The only other access to the disk I can think of would be
backups reading from another node. Any suggestions would be helpful.

Thanks
Danny Wall





--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster


Re: [Linux-cluster] Using GFS and DLM without RHCS

2008-03-31 Thread Gordan Bobic

Danny Wall wrote:

I was wondering if it is possible to run GFS on several machines with a
shared GFS LUN, but not use full clustering like RHCS. From the FAQs:


First of all, what's the problem with having RHCS running? It doesn't 
mean you have to use it to handle resources failing over. You can run it 
all in an active/active setup with load balancing in front.


If this is not an acceptable solution for you and you still cannot be 
bothered to create cluster.conf (and that is all that is required - see 
the sketch below), you can always use OCFS2. Its cluster stack is 
separate (totally unrelated to RHCS), but you still have to create the 
equivalent config, so you won't be saving yourself any effort.


Gordan
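
For scale, "all that is required" really is small; a hypothetical
minimal two-node /etc/cluster/cluster.conf (node names are placeholders,
and a real deployment still needs fence devices defined):

  <?xml version="1.0"?>
  <cluster name="mycluster" config_version="1">
    <clusternodes>
      <clusternode name="node1.example.com" nodeid="1" votes="1"/>
      <clusternode name="node2.example.com" nodeid="2" votes="1"/>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices/>
  </cluster>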

--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster