On 1/30/14, 9:21 PM, "Peter Mistich" <peter.mist...@rackspace.com> wrote:

>hello,
>
>can anyone here answer a question about OST failover configuration
>(active/active)? I think I understand, but want to make sure.
>
>I configure two OSS servers, node1 and node2, with two shared drives,
>/dev/sdb and /dev/sdc.
>
>I run the command on node1 mkfs.lustre --fsname=testfs --ost
>--failnode=node2 --mgsnode=msg /dev/sdb
>
>I run the command on node2 mkfs.lustre --fsname=testfs --ost
>--failnode=node1 --mgsnode=msg /dev/sdc
>
>I mount /dev/sdb on node1 and mount /dev/sdc on node2.
>
>If node1 fails, I just mount /dev/sdb on node2, and that is how
>active/active works.
>
>Is this correct?

It is correct for Lustre; as noted in other replies, you need some more
goop for a full solution.
However, --failnode is deprecated. It is better to format using the
--servicenode option, listing both the primary and secondary nodes:
mkfs.lustre --fsname=testfs --ost --servicenode=node1@NID
--servicenode=node2@NID ...
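Spelled out for the two-OST setup described above, the format commands might
look like the following sketch. The @tcp NIDs and the mgs hostname are
illustrative assumptions; substitute your actual NIDs.

```shell
# Sketch: active/active OST format with --servicenode.  Both nodes are
# listed on BOTH targets, so either node can serve either OST.
# Hostnames and NIDs below are illustrative, not from the original setup.

# On node1, format the first OST:
mkfs.lustre --fsname=testfs --ost \
    --servicenode=node1@tcp --servicenode=node2@tcp \
    --mgsnode=mgs@tcp /dev/sdb

# On node2, format the second OST the same way:
mkfs.lustre --fsname=testfs --ost \
    --servicenode=node1@tcp --servicenode=node2@tcp \
    --mgsnode=mgs@tcp /dev/sdc

# Normal operation: mount /dev/sdb on node1 and /dev/sdc on node2.
# If node1 fails, mount /dev/sdb on node2 as well.
```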

That removes the first-mount issue present with --failnode.

Also, you can use tunefs.lustre --print to verify that the failover NIDs
are correct on disk.
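For example, run against one of the devices from the setup above (device
path assumed from the earlier example):

```shell
# Sketch: inspect the on-disk configuration of an OST without mounting it.
# Run this on the node that currently has access to the device.
tunefs.lustre --print /dev/sdb
# The Parameters line of the output should show the failover/service
# NIDs that were written at format time.
```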
cliffw

>
>Thanks,
>Pete
>_______________________________________________
>Lustre-discuss mailing list
>Lustre-discuss@lists.lustre.org
>http://lists.lustre.org/mailman/listinfo/lustre-discuss
>

