Re: [Samba] CTDB+GFS2+CMAN. clean_start=0 or clean_start=1?

2009-08-17 Thread Yauheni Labko
I've tried to get OCFS2 1.4.1 working with CTDB, but with no success. Maybe you
can give me an idea of what I did wrong.

We have 2 nodes. Both nodes are running Debian/Lenny. I've tried 2.6.26 and
backported 2.6.29/2.6.30 kernels. Access to the OCFS2 partition is via iSCSI.

The configuration file on both nodes:
smb01:~# cat /etc/ocfs2/cluster.conf
cluster:
  node_count = 2
  name = smb-cluster

node:
  ip_port= 
  ip_address = 10.0.1.2
  number = 1
  name = smb01
  cluster = smb-cluster

node:
  ip_port= 
  ip_address = 10.0.1.3
  number = 2
  name = smb02
  cluster = smb-cluster

All partitions are mounted:
/dev/sdb1 on /smb-ocfs2 type ocfs2 (rw,_netdev,heartbeat=local)
/dev/sdc1 on /smb-ctdb-ocfs2 type ocfs2 (rw,_netdev,heartbeat=local)

CTDB puts its lock file at /smb-ctdb-ocfs2/.ctdb_locking.
When I start CTDB on both nodes, I get this in the log:

server/ctdb_recover.c:634 Recovery mode set to NORMAL
ctdb_recovery_lock: Got recovery lock on '/smb-ctdb-ocfs2/.ctdb_locking'
ERROR: recovery lock file /smb-ctdb-ocfs2/.ctdb_locking not locked when 
recovering!
server/ctdb_recover.c:968 startrecovery eventscript has been invoked
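
As a quick sanity check (an illustrative sketch, not from this thread) of whether
the file system really provides cluster-coherent POSIX fcntl() byte-range locks,
which is what the recovery lock needs, something like the following can be used.
Compile it with gcc, run it against a file on the OCFS2 partition on one node,
and start it again on the other node while the first instance still holds the
lock: the second instance must fail if the locks are coherent across nodes.

/*
 * fcntl_locktest.c: minimal probe for POSIX fcntl() byte-range locking
 * on a (cluster) file system. Build: gcc -o fcntl_locktest fcntl_locktest.c
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    struct flock fl;
    int fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <file-on-cluster-fs>\n", argv[0]);
        return 1;
    }

    fd = open(argv[1], O_RDWR | O_CREAT, 0600);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    memset(&fl, 0, sizeof(fl));
    fl.l_type   = F_WRLCK;   /* exclusive write lock */
    fl.l_whence = SEEK_SET;
    fl.l_start  = 0;
    fl.l_len    = 1;         /* lock only the first byte */

    /* Non-blocking attempt: this must fail (EAGAIN/EACCES) while
     * another node holds the lock, or the locks are not coherent. */
    if (fcntl(fd, F_SETLK, &fl) < 0) {
        perror("F_SETLK");
        return 1;
    }

    printf("got exclusive lock on byte 0 of %s, holding for 30s...\n", argv[1]);
    sleep(30);    /* hold the lock so the other node can test */
    return 0;
}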

What configuration did you use? Or rather, what did you use besides CTDB?
I've come across a post saying that pacemaker should be used to get CTDB
working with OCFS2. In what context is pacemaker used?

Yauheni Labko (Eugene Lobko)
Junior System Administrator
Chapdelaine & Co.
(212)208-9150

On Wednesday 12 August 2009 01:17:46 pm Jim McDonough wrote:
 On Tue, Aug 11, 2009 at 11:10 PM, Michael Adam <ob...@samba.org> wrote:
  Btw, I thought OCFS2 was not ready to use with CTDB due to the lack of
  some features. This was the primary reason why I started with GFS.
 
  OCFS2 was lacking support of POSIX fcntl byte range locks (which
  are required to run ctdb) until recently. But this has changed!
  I have not tried it myself, but I think Jim McDonough
  (j...@samba.org, I have added him to Cc) might be able to give
  you some details (versions and such).

 OCFS2 has supported POSIX fcntl byte range locks since 1.4, and I've been
 running ctdb on 1.4.1.




Re: [Samba] CTDB+GFS2+CMAN. clean_start=0 or clean_start=1?

2009-08-17 Thread Yauheni Labko
Thank you, Michael. I tried OCFS2. OCFS2 administration looks easier than
GFS's.

Yauheni Labko (Eugene Lobko)
Junior System Administrator
Chapdelaine & Co.
(212)208-9150

On Tuesday 11 August 2009 05:10:22 pm Michael Adam wrote:
 Yauheni Labko wrote:
  Thank you for the answer, Michael.
 
  As far as I understood, clean_start=1 is absolutely ok for GFS/GFS2?

 Sorry, I am not an expert in GFS settings. (But read on...)

  CTDB is not going to work without the Red Hat Cluster Manager. CMAN starts
  dlm_controld and gfs_controld. ccsd handles node-to-node communication.

 Well GFS needs the cman processes, so CTDB needs them, too.
 But CTDB only uses one lock file in the cluster file system.
 Apart from that, the CTDB daemons communicate with each other
 via tcp all on their own.

  I think GPFS has a manager similar to CMAN. clean_start=1 is
  the only setting which can provide the necessary access to the GFS/GFS2
  partitions as CTDB requires. Correct me if I'm wrong.

 Sorry again. CTDB is completely ignorant with respect to GFS or
 CMAN configuration options. It only needs a cluster file system
 that supports POSIX fcntl() byte range locks. CTDB basically treats
 the file system as a black box.
 So CTDB does not care about the value of clean_start as such. Just make
 sure that you don't start ctdbd before the cman stuff is up and running
 and the GFS file system is mounted.

  Btw, I thought OCFS2 was not ready to use with CTDB due to the lack of
  some features. This was the primary reason why I started with GFS.

 OCFS2 was lacking support of POSIX fcntl byte range locks (which
 are required to run ctdb) until recently. But this has changed!
 I have not tried it myself, but I think Jim McDonough
 (j...@samba.org, I have added him to Cc) might be able to give
 you some details (versions and such).

  I left manual fencing for testing only. I was going to use iLO in
  production.

 OK.

 Hope this somewhat helps... :-)

 Cheers - Michael

  Yauheni Labko (Eugene Lobko)
  Junior System Administrator
  Chapdelaine & Co.
  (212)208-9150
 
   CTDB is pretty ignorant of CMAN as such.
   It just relies on a cluster file system, like GFS2.
  
  
   So you should only start ctdbd when the cluster is up
   and the gfs2 file system is mounted. I think you should
   not start ctdbd as a cluster service managed by cman,
   since ctdbd can be considered a cluster manager for
   certain services (like samba...) itself. Apart from
   that, ctdb should be considered pretty much independent
   of the red hat cluster manager.
  
   CTDB needs a file in the cluster file system, the
   recovery lock file. The location of this file (or a
   directory, in which such a file can be created) should
   be specified in the CTDB_RECOVERY_LOCK=... setting
   in /etc/sysconfig/ctdb.
  
   At a glance, your cluster.conf looks sane, but
   I think manual fencing can be a real problem with
   cman.
  
   GPFS is very well tested with ctdb.
   I think there are many people testing ctdb with gfs2.
   I have heard positive feedback of people using ctdb
   with GlusterFS and lustre (and recently with ocfs2).
  
   You might want to join the #ctdb irc channel on freenode.
   There are usually some people around with more expertise
   in gfs2 than me.
  
   Cheers - Michael
  
   Yauheni Labko wrote:
Hi everybody,
   
I have tested CTDB+GFS2+CMAN under Debian. It works well, but I do not
understand some points.
It is possible to run CTDB by defining it under the services section in
cluster.conf, but running it on the second node shuts down the process
on the first one. My CTDB configuration implies 2 active-active
nodes.
   
Does CTDB care if the node starts with clean_start=0 or
clean_start=1? man fenced says this is a safe way especially during
startup because it prevents data corruption if a node was dead for
some reason. From my understanding CTDB uses CMAN only as a module to
get access to GFS/GFS2 partitions. Or maybe it is better to look at
GPFS and LustreFS?
   
Could anybody show a working configuration of cluster.conf for
CTDB+GFS2+CMAN?

I used the following cluster.conf and ctdb conf:
   
<?xml version="1.0"?>
<cluster name="smb-cluster" config_version="8">
  <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
  <cman expected_votes="1" two_node="1"/>
  <cman cluster_id="101"/>
  <clusternodes>
    <clusternode name="smb01" votes="1" nodeid="1">
      <fence>
        <!-- Handle fencing manually -->
        <method name="human">
          <device name="human" nodename="smb01"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="smb02" votes="1" nodeid="2">
      <fence>
        <!-- Handle fencing manually -->
        <method name="human">
          <device name="human" nodename="smb02"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <!-- Define manual fencing

Re: [Samba] CTDB+GFS2+CMAN. clean_start=0 or clean_start=1?

2009-08-17 Thread Jim McDonough
On Mon, Aug 17, 2009 at 8:59 AM, Yauheni Labko <y...@chappy.com> wrote:
 I've tried to get OCFS2 1.4.1 working with CTDB, but with no success. Maybe you
 can give me an idea of what I did wrong.
It looks like you're using the ocfs2 standalone kernel cluster stack.
This one doesn't support the locks CTDB needs.  You'll need to use
pacemaker, so the setup will be a bit bigger.

However, take a look at
http://www.novell.com/documentation/sle_ha/book_sleha/?page=/documentation/sle_ha/book_sleha/data/book_sleha.html

The section on setting up ocfs2 has a cookbook to follow for doing the
right cluster commands.  I don't know exactly what packages you'll
need on Debian, or if the appropriate levels are available in packages
there.  The SHA1 I gave you earlier should show whether the kernel
ocfs2 module even supports the locks.

-- 
Jim McDonough
Samba Team
SUSE labs
jmcd at samba dot org
jmcd at themcdonoughs dot org


Re: [Samba] CTDB+GFS2+CMAN. clean_start=0 or clean_start=1?

2009-08-12 Thread Jim McDonough
On Tue, Aug 11, 2009 at 11:10 PM, Michael Adam <ob...@samba.org> wrote:
 Btw, I thought OCFS2 was not ready to use with CTDB due to the lack of some
 features. This was the primary reason why I started with GFS.

 OCFS2 was lacking support of POSIX fcntl byte range locks (which
 are required to run ctdb) until recently. But this has changed!
 I have not tried it myself, but I think Jim McDonough
 (j...@samba.org, I have added him to Cc) might be able to give
 you some details (versions and such).
OCFS2 has supported POSIX fcntl byte range locks since 1.4, and I've been
running ctdb on 1.4.1.


-- 
Jim McDonough
Samba Team
SUSE labs
jmcd at samba dot org
jmcd at themcdonoughs dot org


Re: [Samba] CTDB+GFS2+CMAN. clean_start=0 or clean_start=1?

2009-08-12 Thread Jim McDonough
On Wed, Aug 12, 2009 at 7:17 PM, Jim McDonough <j...@samba.org> wrote:
 OCFS2 has supported POSIX fcntl byte range locks since 1.4, and I've been
 running ctdb on 1.4.1.

Let me modify that statement a bit...it's on SLES11.  I've been told
that there is no oss.oracle.com release yet containing that code.

-- 
Jim McDonough
Samba Team
SUSE labs
jmcd at samba dot org
jmcd at themcdonoughs dot org


Re: [Samba] CTDB+GFS2+CMAN. clean_start=0 or clean_start=1?

2009-08-12 Thread Jim McDonough
On Wed, Aug 12, 2009 at 7:28 PM, Jim McDonough <j...@samba.org> wrote:
 On Wed, Aug 12, 2009 at 7:17 PM, Jim McDonough <j...@samba.org> wrote:
 OCFS2 has supported POSIX fcntl byte range locks since 1.4, and I've been
 running ctdb on 1.4.1.

 Let me modify that statement a bit...it's on SLES11.  I've been told
 that there is no oss.oracle.com release yet containing that code.
One additional piece:
The SHA1 for the posix locking code commit is
53da4939f349d4edd283b043219221ca5b78e4d4 in mainline.
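
As an aside: given a clone of the mainline kernel git tree, a command along
the lines of "git describe --contains 53da4939f349d4edd283b043219221ca5b78e4d4"
should report the first kernel tag that contains the commit, which tells you
whether a given kernel is new enough.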

-- 
Jim McDonough
Samba Team
SUSE labs
jmcd at samba dot org
jmcd at themcdonoughs dot org


Re: [Samba] CTDB+GFS2+CMAN. clean_start=0 or clean_start=1?

2009-08-11 Thread Michael Adam
Yauheni Labko wrote:
 Thank you for the answer, Michael.
 
 As far as I understood, clean_start=1 is absolutely ok for GFS/GFS2?

Sorry, I am not an expert in GFS settings. (But read on...)

 CTDB is not going to work without the Red Hat Cluster Manager. CMAN starts
 dlm_controld and gfs_controld. ccsd handles node-to-node communication.

Well GFS needs the cman processes, so CTDB needs them, too.
But CTDB only uses one lock file in the cluster file system.
Apart from that, the CTDB daemons communicate with each other
via tcp all on their own.

 I think GPFS has a manager similar to CMAN. clean_start=1 is the only
 setting which can provide the necessary access to the GFS/GFS2 partitions as
 CTDB requires. Correct me if I'm wrong.

Sorry again. CTDB is completely ignorant with respect to GFS or
CMAN configuration options. It only needs a cluster file system
that supports POSIX fcntl() byte range locks. CTDB basically treats
the file system as a black box.
So CTDB does not care about the value of clean_start as such. Just make
sure that you don't start ctdbd before the cman stuff is up and running
and the GFS file system is mounted.
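
As a practical cross-node check that a file system really provides coherent
fcntl locks, the ctdb source tree ships a small test utility called ping_pong;
I believe the usual invocation is "ping_pong <file-on-cluster-fs> <N>" with N
set to the number of nodes plus one, started on all nodes at the same time
(see the bare-bones fcntl sketch earlier in this thread for the same idea).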

 Btw, I thought OCFS2 was not ready to use with CTDB due to the lack of some
 features. This was the primary reason why I started with GFS.

OCFS2 was lacking support of POSIX fcntl byte range locks (which
are required to run ctdb) until recently. But this has changed!
I have not tried it myself, but I think Jim McDonough
(j...@samba.org, I have added him to Cc) might be able to give
you some details (versions and such).

 I left manual fencing for testing only. I was going to use iLO in production.

OK.

Hope this somewhat helps... :-)

Cheers - Michael

 Yauheni Labko (Eugene Lobko)
 Junior System Administrator
 Chapdelaine & Co.
 (212)208-9150
 
  CTDB is pretty ignorant of CMAN as such.
  It just relies on a cluster file system, like GFS2.
 
 
  So you should only start ctdbd when the cluster is up
  and the gfs2 file system is mounted. I think you should
  not start ctdbd as a cluster service managed by cman,
  since ctdbd can be considered a cluster manager for
  certain services (like samba...) itself. Apart from
  that, ctdb should be considered pretty much independent
  of the red hat cluster manager.
 
  CTDB needs a file in the cluster file system, the
  recovery lock file. The location of this file (or a
  directory, in which such a file can be created) should
  be specified in the CTDB_RECOVERY_LOCK=... setting
  in /etc/sysconfig/ctdb.
 
  At a glance, your cluster.conf looks sane, but
  I think manual fencing can be a real problem with
  cman.
 
  GPFS is very well tested with ctdb.
  I think there are many people testing ctdb with gfs2.
  I have heard positive feedback of people using ctdb
  with GlusterFS and lustre (and recently with ocfs2).
 
  You might want to join the #ctdb irc channel on freenode.
  There are usually some people around with more expertise
  in gfs2 than me.
 
  Cheers - Michael
 
  Yauheni Labko wrote:
   Hi everybody,
  
   I have tested CTDB+GFS2+CMAN under Debian. It works well, but I do not
   understand some points.
   It is possible to run CTDB by defining it under the services section in
   cluster.conf, but running it on the second node shuts down the process on
   the first one. My CTDB configuration implies 2 active-active nodes.
  
   Does CTDB care if the node starts with clean_start=0 or
   clean_start=1? man fenced says this is a safe way especially during
   startup because it prevents data corruption if a node was dead for some
   reason. From my understanding CTDB uses CMAN only as a module to get
   access to GFS/GFS2 partitions. Or maybe it is better to look at GPFS and
   LustreFS?
  
   Could anybody show a working configuration of cluster.conf for
   CTDB+GFS2+CMAN?

   I used the following cluster.conf and ctdb conf:
  
   <?xml version="1.0"?>
   <cluster name="smb-cluster" config_version="8">
     <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
     <cman expected_votes="1" two_node="1"/>
     <cman cluster_id="101"/>
     <clusternodes>
       <clusternode name="smb01" votes="1" nodeid="1">
         <fence>
           <!-- Handle fencing manually -->
           <method name="human">
             <device name="human" nodename="smb01"/>
           </method>
         </fence>
       </clusternode>
       <clusternode name="smb02" votes="1" nodeid="2">
         <fence>
           <!-- Handle fencing manually -->
           <method name="human">
             <device name="human" nodename="smb02"/>
           </method>
         </fence>
       </clusternode>
     </clusternodes>
     <fencedevices>
       <!-- Define manual fencing -->
       <fencedevice name="human" agent="fence_manual"/>
       <!-- Define ilo fencing -->
       <fencedevice name="ilo" agent="fence_ilo" login="admin" password="foo"/>
     </fencedevices>
   </cluster>
  
   # Options to ctdbd. This is read by /etc/init.d/ctdb
   CTDB_RECOVERY_LOCK=/smb-ctdb/.ctdb_locking
   CTDB_PUBLIC_INTERFACE=eth2
   

[Samba] CTDB+GFS2+CMAN. clean_start=0 or clean_start=1?

2009-08-03 Thread Yauheni Labko
Hi everybody,

I have tested CTDB+GFS2+CMAN under Debian. It works well, but I do not
understand some points.
It is possible to run CTDB by defining it under the services section in
cluster.conf, but running it on the second node shuts down the process on the
first one. My CTDB configuration implies 2 active-active nodes.

Does CTDB care if the node starts with clean_start=0 or clean_start=1? man
fenced says this is a safe way especially during startup because it prevents
data corruption if a node was dead for some reason. From my understanding
CTDB uses CMAN only as a module to get access to GFS/GFS2 partitions. Or
maybe it is better to look at GPFS and LustreFS?

Could anybody show a working configuration of cluster.conf for
CTDB+GFS2+CMAN?

I used the following cluster.conf and ctdb conf:

<?xml version="1.0"?>
<cluster name="smb-cluster" config_version="8">
  <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
  <cman expected_votes="1" two_node="1"/>
  <cman cluster_id="101"/>
  <clusternodes>
    <clusternode name="smb01" votes="1" nodeid="1">
      <fence>
        <!-- Handle fencing manually -->
        <method name="human">
          <device name="human" nodename="smb01"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="smb02" votes="1" nodeid="2">
      <fence>
        <!-- Handle fencing manually -->
        <method name="human">
          <device name="human" nodename="smb02"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <!-- Define manual fencing -->
    <fencedevice name="human" agent="fence_manual"/>
    <!-- Define ilo fencing -->
    <fencedevice name="ilo" agent="fence_ilo" login="admin" password="foo"/>
  </fencedevices>
</cluster>

# Options to ctdbd. This is read by /etc/init.d/ctdb
CTDB_RECOVERY_LOCK=/smb-ctdb/.ctdb_locking
CTDB_PUBLIC_INTERFACE=eth2
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_MANAGES_SAMBA=yes
CTDB_INIT_STYLE=ubuntu
CTDB_NODES=/etc/ctdb/nodes
CTDB_NOTIFY_SCRIPT=/etc/ctdb/notify.sh
CTDB_DBDIR=/var/ctdb
CTDB_DBDIR_PERSISTENT=/var/ctdb/persistent
CTDB_SOCKET=/tmp/ctdb.socket
CTDB_LOGFILE=/var/log/ctdb.log
CTDB_DEBUGLEVEL=2
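
For illustration (the addresses below are made up; assume an internal cluster
network and a public network on eth2), CTDB_NODES and CTDB_PUBLIC_ADDRESSES
point at two small text files that typically look like this:

# /etc/ctdb/nodes: one internal (cluster) IP per line, identical on all nodes
10.0.1.2
10.0.1.3

# /etc/ctdb/public_addresses: floating client-facing IPs, <address/mask> <interface>
192.168.1.100/24 eth2
192.168.1.101/24 eth2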

Yauheni Labko (Eugene Lobko)
Junior System Administrator
Chapdelaine & Co.
(212)208-9150



Re: [Samba] CTDB+GFS2+CMAN. clean_start=0 or clean_start=1?

2009-08-03 Thread Michael Adam
Hi,

CTDB is pretty ignorant of CMAN as such.
It just relies on a cluster file system, like GFS2.

So you should only start ctdbd when the cluster is up
and the gfs2 file system is mounted. I think you should
not start ctdbd as a cluster service managed by cman,
since ctdbd can be considered a cluster manager for
certain services (like samba...) itself. Apart from
that, ctdb should be considered pretty much independent
of the red hat cluster manager.

CTDB needs a file in the cluster file system, the
recovery lock file. The location of this file (or a
directory, in which such a file can be created) should
be specified in the CTDB_RECOVERY_LOCK=... setting
in /etc/sysconfig/ctdb.

At a glance, your cluster.conf looks sane, but
I think manual fencing can be a real problem with
cman.

GPFS is very well tested with ctdb.
I think there are many people testing ctdb with gfs2.
I have heard positive feedback of people using ctdb
with GlusterFS and lustre (and recently with ocfs2).

You might want to join the #ctdb irc channel on freenode.
There are usually some people around with more expertise
in gfs2 than me.

Cheers - Michael

Yauheni Labko wrote:
 Hi everybody,
 
 I have tested CTDB+GFS2+CMAN under Debian. It works well, but I do not
 understand some points.
 It is possible to run CTDB by defining it under the services section in
 cluster.conf, but running it on the second node shuts down the process on the
 first one. My CTDB configuration implies 2 active-active nodes.
 
 Does CTDB care if the node starts with clean_start=0 or clean_start=1? man
 fenced says this is a safe way especially during startup because it prevents
 data corruption if a node was dead for some reason. From my understanding
 CTDB uses CMAN only as a module to get access to GFS/GFS2 partitions. Or
 maybe it is better to look at GPFS and LustreFS?
 
 Could anybody show a working configuration of cluster.conf for
 CTDB+GFS2+CMAN?

 I used the following cluster.conf and ctdb conf:
 
 <?xml version="1.0"?>
 <cluster name="smb-cluster" config_version="8">
   <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
   <cman expected_votes="1" two_node="1"/>
   <cman cluster_id="101"/>
   <clusternodes>
     <clusternode name="smb01" votes="1" nodeid="1">
       <fence>
         <!-- Handle fencing manually -->
         <method name="human">
           <device name="human" nodename="smb01"/>
         </method>
       </fence>
     </clusternode>
     <clusternode name="smb02" votes="1" nodeid="2">
       <fence>
         <!-- Handle fencing manually -->
         <method name="human">
           <device name="human" nodename="smb02"/>
         </method>
       </fence>
     </clusternode>
   </clusternodes>
   <fencedevices>
     <!-- Define manual fencing -->
     <fencedevice name="human" agent="fence_manual"/>
     <!-- Define ilo fencing -->
     <fencedevice name="ilo" agent="fence_ilo" login="admin" password="foo"/>
   </fencedevices>
 </cluster>
 
 # Options to ctdbd. This is read by /etc/init.d/ctdb
 CTDB_RECOVERY_LOCK=/smb-ctdb/.ctdb_locking
 CTDB_PUBLIC_INTERFACE=eth2
 CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
 CTDB_MANAGES_SAMBA=yes
 CTDB_INIT_STYLE=ubuntu
 CTDB_NODES=/etc/ctdb/nodes
 CTDB_NOTIFY_SCRIPT=/etc/ctdb/notify.sh
 CTDB_DBDIR=/var/ctdb
 CTDB_DBDIR_PERSISTENT=/var/ctdb/persistent
 CTDB_SOCKET=/tmp/ctdb.socket
 CTDB_LOGFILE=/var/log/ctdb.log
 CTDB_DEBUGLEVEL=2
 
 Yauheni Labko (Eugene Lobko)
 Junior System Administrator
 Chapdelaine & Co.
 (212)208-9150



Re: [Samba] CTDB+GFS2+CMAN. clean_start=0 or clean_start=1?

2009-08-03 Thread Yauheni Labko
Thank you for the answer, Michael.

As far as I understood, clean_start=1 is absolutely ok for GFS/GFS2?

CTDB is not going to work without the Red Hat Cluster Manager. CMAN starts
dlm_controld and gfs_controld. ccsd handles node-to-node communication. I
think GPFS has a manager similar to CMAN. clean_start=1 is the only
setting which can provide the necessary access to the GFS/GFS2 partitions as
CTDB requires. Correct me if I'm wrong.

Btw, I thought OCFS2 was not ready to use with CTDB due to the lack of some
features. This was the primary reason why I started with GFS.

I left manual fencing for testing only. I was going to use iLO in production.

Yauheni Labko (Eugene Lobko)
Junior System Administrator
Chapdelaine & Co.
(212)208-9150

 CTDB is pretty ignorant of CMAN as such.
 It just relies on a cluster file system, like GFS2.


 So you should only start ctdbd when the cluster is up
 and the gfs2 file system is mounted. I think you should
 not start ctdbd as a cluster service managed by cman,
 since ctdbd can be considered a cluster manager for
 certain services (like samba...) itself. Apart from
 that, ctdb should be considered pretty much independent
 of the red hat cluster manager.

 CTDB needs a file in the cluster file system, the
 recovery lock file. The location of this file (or a
 directory, in which such a file can be created) should
 be specified in the CTDB_RECOVERY_LOCK=... setting
 in /etc/sysconfig/ctdb.

 At a glance, your cluster.conf looks sane, but
 I think manual fencing can be a real problem with
 cman.

 GPFS is very well tested with ctdb.
 I think there are many people testing ctdb with gfs2.
 I have heard positive feedback of people using ctdb
 with GlusterFS and lustre (and recently with ocfs2).

 You might want to join the #ctdb irc channel on freenode.
 There are usually some people around with more expertise
 in gfs2 than me.

 Cheers - Michael

 Yauheni Labko wrote:
  Hi everybody,
 
  I have tested CTDB+GFS2+CMAN under Debian. It works well, but I do not
  understand some points.
  It is possible to run CTDB by defining it under the services section in
  cluster.conf, but running it on the second node shuts down the process on
  the first one. My CTDB configuration implies 2 active-active nodes.
 
  Does CTDB care if the node starts with clean_start=0 or
  clean_start=1? man fenced says this is a safe way especially during
  startup because it prevents data corruption if a node was dead for
  some reason. From my understanding CTDB uses CMAN only as a module to
  get access to GFS/GFS2 partitions. Or maybe it is better to look at
  GPFS and LustreFS?
 
  Could anybody show a working configuration of cluster.conf for
  CTDB+GFS2+CMAN?

  I used the following cluster.conf and ctdb conf:
 
  <?xml version="1.0"?>
  <cluster name="smb-cluster" config_version="8">
    <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
    <cman expected_votes="1" two_node="1"/>
    <cman cluster_id="101"/>
    <clusternodes>
      <clusternode name="smb01" votes="1" nodeid="1">
        <fence>
          <!-- Handle fencing manually -->
          <method name="human">
            <device name="human" nodename="smb01"/>
          </method>
        </fence>
      </clusternode>
      <clusternode name="smb02" votes="1" nodeid="2">
        <fence>
          <!-- Handle fencing manually -->
          <method name="human">
            <device name="human" nodename="smb02"/>
          </method>
        </fence>
      </clusternode>
    </clusternodes>
    <fencedevices>
      <!-- Define manual fencing -->
      <fencedevice name="human" agent="fence_manual"/>
      <!-- Define ilo fencing -->
      <fencedevice name="ilo" agent="fence_ilo" login="admin" password="foo"/>
    </fencedevices>
  </cluster>
 
  # Options to ctdbd. This is read by /etc/init.d/ctdb
  CTDB_RECOVERY_LOCK=/smb-ctdb/.ctdb_locking
  CTDB_PUBLIC_INTERFACE=eth2
  CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
  CTDB_MANAGES_SAMBA=yes
  CTDB_INIT_STYLE=ubuntu
  CTDB_NODES=/etc/ctdb/nodes
  CTDB_NOTIFY_SCRIPT=/etc/ctdb/notify.sh
  CTDB_DBDIR=/var/ctdb
  CTDB_DBDIR_PERSISTENT=/var/ctdb/persistent
  CTDB_SOCKET=/tmp/ctdb.socket
  CTDB_LOGFILE=/var/log/ctdb.log
  CTDB_DEBUGLEVEL=2
 
  Yauheni Labko (Eugene Lobko)
  Junior System Administrator
  Chapdelaine & Co.
  (212)208-9150

