Thanks Pavlos, perhaps I'll recheck my configuration... I guess I'm missing something.
On Mon, Dec 13, 2010 at 4:49 AM, Pavlos Parissis <[email protected]> wrote:

> On 11 December 2010 16:48, Pavlos Parissis <[email protected]> wrote:
> > On 9 December 2010 09:44, Linux Cook <[email protected]> wrote:
> >> Hi!
> >>
> >> I have an lsb resource that does not run during bootup but runs
> >> successfully after issuing the command:
> >
> > Any application that is under cluster control should be started only by
> > the cluster, not by the system via the init process.
> >
> >> crm resource start <resourcename>
> >>
> >> Here's my config:
> >>
> >> primitive nuxeoctl lsb:nuxeoctl \
> >>     op monitor interval="30s" timeout="300" depth="0" \
> >>     op start interval="0" timeout="300s" \
> >>     op stop interval="0" timeout="300s"
> >> primitive data_fs ocf:heartbeat:Filesystem \
> >>     params device="/dev/mapper/data" directory="/var/lib/data" fstype="ext3"
> >> clone datafsClone data_fs
> >
> > How is it possible to mount the same FS on both systems? Do you have
> > dual-primary DRBD or another cluster FS deployed?
>
> Ignore the above statement; after spending too many hours on HA
> filesystems, for some reason you start thinking that a cluster only
> works with DRBD and the like.
>
> >> clone nuxeoClone nuxeoctl
> >> colocation colNuxeowithDataFS inf: nuxeoClone datafsClone
> >
> > You don't need that, because the cloned primitives will run on all
> > systems anyway.
> >
> >> order ordNuxeoafterDataFS inf: datafsClone nuxeoClone
> >
> > You had better set the start action on nuxeoClone, so it should be:
> > order ordNuxeoafterDataFS inf: datafsClone nuxeoClone:start
> >
> >> Anything I missed out? By the way, the lsb resource is an application
> >> based on JBoss AS.
> >
> > You didn't say what the problem is, but I assume it is that Pacemaker
> > doesn't start your clones, right?
> I managed to simulate what you have, and for me it starts the clone on boot:
>
> r...@node-02:~# crm_mon -1
> ============
> Last updated: Mon Dec 13 13:47:35 2010
> Stack: Heartbeat
> Current DC: node-02 (07b89fad-0626-480d-8660-238f9372bc4b) - partition with quorum
> Version: 1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b
> 2 Nodes configured, unknown expected votes
> 2 Resources configured.
> ============
>
> Online: [ node-02 node-01 ]
>
> Clone Set: clo-dummy-02
>     Started: [ node-02 node-01 ]
> Clone Set: clo-dummy-01
>     Started: [ node-02 node-01 ]
>
> I use dummies in my conf, but the logic is the same:
>
> r...@node-02:~# crm configure show
> node $id="07b89fad-0626-480d-8660-238f9372bc4b" node-02
> node $id="954d162e-0d7b-4e28-a6e3-8d6eee73034e" node-01
> primitive dummy01 ocf:heartbeat:Dummy \
>     op monitor interval="40" timeout="30"
> primitive dummy02 ocf:heartbeat:Dummy \
>     op monitor interval="40" timeout="30"
> clone clo-dummy-01 dummy01
> clone clo-dummy-02 dummy02
> property $id="cib-bootstrap-options" \
>     dc-version="1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b" \
>     cluster-infrastructure="Heartbeat" \
>     stonith-enabled="false"
> r...@node-02:~#

_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
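[Editor's note] Picking up Pavlos's point that an application under cluster control should be started only by the cluster: if the nuxeoctl init script is still enabled in the OS boot sequence, the init system and Pacemaker can fight over the service. A possible fix, sketched here for Red Hat- and Debian-style systems (which command applies depends on your distribution; the service name nuxeoctl is taken from the config above):

```shell
# Remove the service from OS boot so that only Pacemaker starts it.
chkconfig nuxeoctl off            # RHEL/CentOS style
# update-rc.d -f nuxeoctl remove  # Debian/Ubuntu equivalent
```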

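[Editor's note] A frequent cause of lsb: resources misbehaving under Pacemaker is an init script that is not LSB-compliant, particularly in the exit codes of its status action. The sketch below uses a stand-in dummy script (not the real nuxeoctl) to illustrate the behaviour Pacemaker relies on: status must exit 0 while the service runs and 3 once it is stopped.

```shell
# Create a minimal LSB-style dummy init script; /tmp paths are illustrative.
cat > /tmp/dummy-init <<'EOF'
#!/bin/sh
STATE=/tmp/dummy-init.state
case "$1" in
    start)  touch "$STATE" ;;
    stop)   rm -f "$STATE" ;;
    status) [ -f "$STATE" ] || exit 3 ;;   # 3 = not running, per LSB
    *)      exit 2 ;;
esac
exit 0
EOF
chmod +x /tmp/dummy-init

/tmp/dummy-init start
rc=0; /tmp/dummy-init status || rc=$?
echo "status while running: $rc"   # expect 0

/tmp/dummy-init stop
rc=0; /tmp/dummy-init status || rc=$?
echo "status after stop: $rc"      # expect 3
```

Running your real init script through the same start/status/stop sequence by hand is a quick way to rule out script-compliance problems before digging into the cluster configuration.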