And now the penny dropped.. Thanks!!

I create one block device on node 1, then map that same device as a block
device on the other nodes.. I did that once before, but still couldn't see
any other hosts with the sbd -d /dev/sdd list command.. But now I know that
you have to allocate slots for the nodes as well..

So now I just have to create another two devices the same way to get my
three block devices.
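Put together, the whole sequence per device should look roughly like this (a rough sketch with my device and node names as examples; note that "sbd ... create" wipes any existing sbd metadata on the device, and it only needs to be run from one node since the disks are shared):

```shell
# Hypothetical device and node names -- adjust to your own mapping.
DEVICES="/dev/sdd /dev/sde /dev/sdf"
NODES="testclu01 testclu02 testclu03"

for dev in $DEVICES; do
    # Initialize the SBD metadata on the shared device
    # (destroys any existing sbd data in the header/slot area!)
    sbd -d "$dev" create
    # Allocate one message slot per cluster node
    for node in $NODES; do
        sbd -d "$dev" allocate "$node"
    done
    # Verify: every node should now appear with status "clear"
    sbd -d "$dev" list
done
```

After this, every node should see the same slot table from any of the three devices, since they all read the same shared disks.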

Cheers
/Fred

On Thu, Mar 28, 2013 at 12:52 PM, Ulrich Windl <[email protected]> wrote:

> Hi!
>
> AFAIK, you'll have to have the same device name on every cluster node, and
> that one device should actually be the same shared disk.
>
> Regards,
> Ulrich
>
> >>> Fredrik Hudner <[email protected]> wrote on 28.03.2013 at 10:52
> in message
> <CAFtwCgP+k83ebWkTrQGjrSY8QLc2LWVNMYfFOE5yp6=u_8+...@mail.gmail.com>:
> > On Tue, Mar 26, 2013 at 4:42 PM, Lars Marowsky-Bree <[email protected]> wrote:
> >
> > > On 2013-03-26T08:10:06, Fredrik Hudner <[email protected]> wrote:
> > >
> > > > Hi,
> > > > I have a question about setting up sbd which I think belongs in this
> > > > forum.
> > > >
> > > > I have 3 nodes: two active Pacemaker nodes and one kind of quorum node.
> > > > I would like to set up sbd as a fencing device between these 3 nodes,
> > > > which are running in VMware instances.
> > > >
> > > > Best demonstrated practice is the use of 3 devices (disks), but I'm
> > > > not sure about the actual setup.
> > >
> > > You must map the same 1 to 3 block devices to the VMs so that
> > > concurrent shared read/write access is allowed.
> > >
> > > Also, if the block devices are not independent but partitions on the
> > > same disk, that's not worth bothering with.
> > >
> > > If they're all on the same host, that's a bit pointless, but you knew
> > > that ;-)
> > >
> > > How are you backing the devices? SAN, FC, iSCSI? If it's iSCSI, you
> > > could decide to directly run the iSCSI initiator in the guest.
> > >
> > > > I have read http://linux-ha.org/wiki/SBD_Fencing but it only helps
> > > > me so far.
> > >
> > > That's because it describes how to configure sbd, not your storage
> > > layer ;-)
> > >
> >
> > Thanks Lars,
> >
> >
> > The backing device is over SAN and the devices are all independent.
> >
> > So when you say to map the same 1 to 3 block devices, my approach should
> > be correct?
> >
> >     1/ add 1 block device on each of the 3 nodes, e.g. /dev/sdd,
> >     /dev/sde and /dev/sdf
> >     2/ map each block device to the other VMs
> >
> >     When I run # sbd -d /dev/sdd list (same for all devices) it only
> >     comes back with all the options I can use.
> >     I then run # sbd -d /dev/sde allocate <nodename> for each device,
> >     and I can see e.g.
> >
> >     on testclu01 (node1):
> >     # sbd -d /dev/sde list
> >     0     testclu02       test     testclu01
> >     but on /dev/sdd:
> >     # sbd -d /dev/sdd list
> >     0     testclu01       clear
> >     # sbd -d /dev/sdf list
> >     (empty)
> >
> >     on testclu03 (node3):
> >     # sbd -d /dev/sde list
> >     0     testclu01       clear
> >     # sbd -d /dev/sdf list
> >     0     testclu02       test     testclu01
> >     # sbd -d /dev/sdd list
> >     (empty)
> >
> >     So I'm not sure if this is ok or not?
> >
> >
> >
> > >  Architect Storage/HA
> > > SUSE LINUX Products GmbH, GF: Jeff Hawn, Jennifer Guild, Felix
> > > Imendörffer, HRB 21284 (AG Nürnberg)
> > > "Experience is the name everyone gives to their mistakes." -- Oscar Wilde
> > >
> > > _______________________________________________
> > > Linux-HA mailing list
> > > [email protected]
> > > http://lists.linux-ha.org/mailman/listinfo/linux-ha
> > > See also: http://linux-ha.org/ReportingProblems
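One more thing I'll verify first, since each device has to be the very same shared disk on every node: the /dev/sdX letters can differ between hosts, so before allocating anything I'll compare the stable ids on each node (a rough sketch; the device names are just my examples):

```shell
# Run on every node: list the stable /dev/disk/by-id aliases that point at
# each candidate device. The WWN/serial part of the alias must match across
# all three nodes, even if the sdX letter differs per host.
for dev in sdd sde sdf; do
    echo "== /dev/$dev =="
    find /dev/disk/by-id -lname "*/$dev"
done
```

If the ids line up on all three nodes, the devices really are the same shared disks and the sbd slot tables should look identical everywhere.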

-- 
Fredrik Hudner
Grosse Pfahlstr 12
30161 Hannover
Germany

Tel: 0511-642 09 548
Mob: 0173-254 39 29