Re: [DRBD-user] 4-way replication

2012-01-17 Thread Andreas Kurz
Hello,

On 01/16/2012 11:05 AM, Benjamin Knoth wrote:
 Hello,
 I think it's clear, and the Pacemaker config is also clear, but I can't
 get a positive result.
 
 I started with 4 machines:
 
 vm01 vm02 vm03 vm04
 
 On vm01 and vm02 I created a DRBD resource with this config:
 
 resource test {
   device    /dev/drbd3;
   meta-disk internal;
   disk      /dev/vg01/test;
   protocol  C;
 
   syncer {
     rate 800M;
   }
 
   on vm01 {
     address 10.10.255.12:7003;
   }
 
   on vm02 {
     address 10.10.255.13:7003;
   }
 }
 
 On vm03 and vm04 I created this DRBD resource:
 
 resource test2 {
   device    /dev/drbd3;
   meta-disk internal;
   disk      /dev/vg01/test;
   protocol  C;
 
   syncer {
     rate 800M;
   }
 
   on vm03 {
     address 10.10.255.14:7003;
   }
 
   on vm04 {
     address 10.10.255.15:7003;
   }
 }
 
 These two unstacked resources are running.
 
 Looking at the documentation, I think I need to create the following
 DRBD resource on vm01-04:
 
 resource stest {
   protocol A;
 
   stacked-on-top-of test2 {
     device  /dev/drbd13;
     address 10.10.255.16:7009;
   }
 
   stacked-on-top-of test {
     device  /dev/drbd13;
     address 10.10.255.17:7009;
   }
 }
 
 But if I save this and copy it to all VMs, I get the following on vm03-04
 when I run drbdadm --stacked create-md stest:

Use the same, complete config on all nodes.
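For reference, a minimal sketch of what "the same, complete config" could look
like on each of the four nodes (the file names are illustrative; the point is
that drbdadm must be able to resolve all three resource definitions on every
machine):

  /etc/drbd.conf           # typically: include "drbd.d/global_common.conf"; include "drbd.d/*.res";
  /etc/drbd.d/test.res     # lower-level resource test  (vm01/vm02), as above
  /etc/drbd.d/test2.res    # lower-level resource test2 (vm03/vm04), as above
  /etc/drbd.d/stest.res    # stacked resource stest, referencing test and test2

stest.res alone is not enough: the resources named in its stacked-on-top-of
sections must also be defined (and included) on every node, otherwise drbdadm
reports "referenced resource ... not defined".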

Regards,
Andreas

-- 
Need help with DRBD?
http://www.hastexo.com/now

 
 drbd.d/stest.res:1: in resource stest, referenced resource 'test' not
 defined.
 
 and on vm01-02:
 
 drbd.d/stest.res:1: in resource stest, referenced resource 'test2' not
 defined.
 
 What do I need so that vm01-02 know about test2 on vm03-04, and vm03-04
 know about test on vm01-02?
 
 Both IP addresses are virtual addresses on vm01 and vm03, where test and
 test2 are primary.
 
 That is what I understood after looking at the picture and the Pacemaker
 configuration.
 
 Best regards
 
 Benjamin
 
 Am 13.01.2012 15:27, schrieb Andreas Kurz:
 Hello,

 On 01/13/2012 12:56 PM, Benjamin Knoth wrote:
 Hi,

 I want to create a 4-node replication with DRBD.
 I have also read the documentation.
 I understand the configuration of a 3-way replication, but how do I
 need to configure the 4-way replication?

 I configured two 2-way resources successfully, and now I need to
 configure the stacked resource.

 Have a look at:

 http://www.drbd.org/users-guide-8.3/s-pacemaker-stacked-resources.html#s-pacemaker-stacked-dr

 ... a picture says more than 1000 words ;-)


 resource r0-U {
   net {
     protocol A;
   }
   stacked-on-top-of r0 {
     device  /dev/drbd10;
     address 192.168.42.1:7788;
   }
   on charlie {
     device    /dev/drbd10;
     disk      /dev/hda6;
     address   192.168.42.2:7788; # Public IP of the backup
     meta-disk internal;
   }
 }


 Is the solution to define a lower-level resource with protocol C on
 servers alice/bob and charlie/daisy, and then one stacked resource,
 where the stacked resource of alice and bob communicates directly with
 the stacked resource of charlie and daisy, like this configuration?

 Yes, configure the replication between two stacked resources.


 resource stacked {
   protocol A;
 
   stacked-on-top-of r0 {
     device  /dev/drbd10;
     address 192.168.:7788;
   }
 
   stacked-on-top-of r0 {
     device  /dev/drbd10;
     address 134.76.28.188:7788;
   }
 }

 Best regards

 Regards,
 Andreas




___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] 4-way replication

2012-01-17 Thread Benjamin Knoth
Hello Andreas,

Am 17.01.2012 14:51, schrieb Andreas Kurz:
 Hello,
 
 
 Use the same, complete config on all nodes.
I copied this config on all nodes.

Best regards

Benjamin

 
 Regards,
 Andreas
 
 
 
 

-- 
Benjamin Knoth
Max Planck Digital Library (MPDL)
Systemadministration
Amalienstrasse 33
80799 Munich, Germany
http://www.mpdl.mpg.de

Mail: kn...@mpdl.mpg.de
Phone:  +49 89 38602 228
Fax:+49-89-38602-280



___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] 4-way replication

2012-01-17 Thread Andreas Kurz
On 01/17/2012 03:02 PM, Benjamin Knoth wrote:
 Hello Andreas,
 
 Am 17.01.2012 14:51, schrieb Andreas Kurz:
 Hello,


 Use the same, complete config on all nodes.
 I copied this config on all nodes.

And still not working? Can you provide or pastebin "drbdadm dump all"
and "cat /proc/drbd" output from a node that gives you that error?

Regards,
Andreas

-- 
Need help with DRBD?
http://www.hastexo.com/now

 
 Best regards
 
 Benjamin
 

 Regards,
 Andreas




___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] 4-way replication

2012-01-17 Thread Jake Smith
- Original Message -
 From: Andreas Kurz andr...@hastexo.com
 To: drbd-user@lists.linbit.com
 Sent: Tuesday, January 17, 2012 9:36:30 AM
 Subject: Re: [DRBD-user] 4-way replication
 
 On 01/17/2012 03:02 PM, Benjamin Knoth wrote:
  Hello Andreas,
  
  Am 17.01.2012 14:51, schrieb Andreas Kurz:
  Hello,
 
 
  Use the same, complete config on all nodes.
  I copied this config on all nodes.

You did combine the config that had test and the config that had test2 into one, 
correct? (Or have all 3 configs on each node, if you keep each resource in its 
own config file.)

So test, test2, and stest all exist in the config on vm01-04?
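A quick way to check this on each node might be something like the following
(default paths, purely illustrative):

  ls /etc/drbd.d/
  # expect something like: global_common.conf  test.res  test2.res  stest.res

  drbdadm dump all | grep '^resource'
  # expect all three resources to be listed: test, test2 and stest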

 
 And still not working? Can you provide or pastebin drbdadm dump all
 and cat /proc/drbd from a node that gives you that error?
 
 Regards,
 Andreas
 
 --
 Need help with DRBD?
 http://www.hastexo.com/now
 
  
  Best regards
  
  Benjamin
  
 
  Regards,
  Andreas
 
 
 
 
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] 4-way replication

2012-01-17 Thread Benjamin Knoth
Hello,

Am 17.01.2012 15:36, schrieb Andreas Kurz:
 On 01/17/2012 03:02 PM, Benjamin Knoth wrote:
 Hello Andreas,

 Am 17.01.2012 14:51, schrieb Andreas Kurz:
 Hello,


 Use the same, complete config on all nodes.
 I copied this config on all nodes.
 
 And still not working? Can you provide or pastebin drbdadm dump all
 and cat /proc/drbd from a node that gives you that error?

On vm01 and vm02 I get the following for resource test in cat /proc/drbd;
the unstacked resource works:

 3: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-
ns:1077644 nr:0 dw:33232 dr:1044968 al:13 bm:63 lo:0 pe:0 ua:0 ap:0
ep:1 wo:b oos:0

After I copied the config with resource stest to all 4 nodes, I get the
following on vm01 and vm02.

drbdadm dump all

drbd.d/stest.res:1: in resource stest, referenced resource 'test2' not
defined.

And cat /proc/drbd displays only the unstacked test resource:

  3: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-
ns:0 nr:0 dw:0 dr:528 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

On vm03 and vm04 I also can't find a stacked resource in /proc/drbd:

 3: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-
ns:0 nr:0 dw:0 dr:536 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

drbdadm dump all
drbd.d/stest.res:1: in resource stest, referenced resource 'test' not
defined.

You can see that the referenced resources differ between vm01-02 and
vm03-04. In the example the unstacked resources also had different
names. So DRBD needs to know that the referenced resource test is only
available on vm01-02 and test2 is only available on vm03-04.
Is that the problem I need to solve, or not?

Best regards
Benjamin

 
 Regards,
 Andreas
 
 
 
 
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] 4-way replication

2012-01-17 Thread Benjamin Knoth
Hi Jake,

Am 17.01.2012 15:41, schrieb Jake Smith:
 - Original Message -
 From: Andreas Kurz andr...@hastexo.com
 To: drbd-user@lists.linbit.com
 Sent: Tuesday, January 17, 2012 9:36:30 AM
 Subject: Re: [DRBD-user] 4-way replication

 On 01/17/2012 03:02 PM, Benjamin Knoth wrote:
 Hello Andreas,

 Am 17.01.2012 14:51, schrieb Andreas Kurz:
 Hello,


 Use the same, complete config on all nodes.
 I copied this config on all nodes.
 
 You did combine the config that had test and the config that had test2 into 
 one correct? (or have all 3 configs on each node if you have each resource in 
 it's own config file)
 
 So test, test2, and stest all exist in the config on vm01-04?

No, stest is on all 4 nodes (vm01-04), but test is only available on
vm01-02 and test2 is only available on vm03-04.

Best regards

Benjamin

 

 And still not working? Can you provide or pastebin drbdadm dump all
 and cat /proc/drbd from a node that gives you that error?

 Regards,
 Andreas

 --
 Need help with DRBD?
 http://www.hastexo.com/now


 Best regards

 Benjamin


 Regards,
 Andreas





-- 
Benjamin Knoth
Max Planck Digital Library (MPDL)
Systemadministration
Amalienstrasse 33
80799 Munich, Germany
http://www.mpdl.mpg.de

Mail: kn...@mpdl.mpg.de
Phone:  +49 89 38602 228
Fax:+49-89-38602-280



___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] 4-way replication

2012-01-17 Thread Andreas Kurz
Hello,

On 01/17/2012 04:22 PM, Benjamin Knoth wrote:
 Hello,
 
 Am 17.01.2012 15:36, schrieb Andreas Kurz:
 On 01/17/2012 03:02 PM, Benjamin Knoth wrote:
 Hello Andreas,

 Am 17.01.2012 14:51, schrieb Andreas Kurz:
 Hello,


 Use the same, complete config on all nodes.
 I copied this config on all nodes.

 And still not working? Can you provide or pastebin drbdadm dump all
 and cat /proc/drbd from a node that gives you that error?
 
 on vm01 and vm02 i get for resource test on cat /proc/drbd. The not
 stacked resource works
 
  3: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-
 ns:1077644 nr:0 dw:33232 dr:1044968 al:13 bm:63 lo:0 pe:0 ua:0 ap:0
 ep:1 wo:b oos:0
 
 After i copied the config with resource stest to all 4 nodes i get the
 following on vm01 and vm02.
 
 drbdadm dump all
 
 drbd.d/stest.res:1: in resource stest, referenced resource 'test2' not
 defined.
 
 And cat /proc/drbd display only the unstacked test resource
 
   3: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-
 ns:0 nr:0 dw:0 dr:528 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
 
 On vm03 and vm04 i can't also find a stacked resource in /proc/drbd
 
  3: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-
 ns:0 nr:0 dw:0 dr:536 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
 
 drbdadm dump all
 drbd.d/stest.res:1: in resource stest, referenced resource 'test' not
 defined.
 
 You see that on the referenced resource are different between vm01-02
 and vm03-04. On the example the unstacked resources had also different
 names. In this part DRBD need to know that the referenced resource test
 is also available on vm01-02 and test2 is only available on vm03-04.
 That is the problem what i need to solve or not?

Yes ... put _all_ resource configs on _all_ nodes (and include them in
your config of course): the same config on all four nodes
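Once the identical config is in place everywhere, bringing up the stacked
device would look roughly like this (a sketch only; the exact force/overwrite
syntax can differ between DRBD 8.3 minor versions, and the service IPs
10.10.255.17/.16 must already be up on the current Primaries of test and test2):

  # on vm01 (Primary of test, holding 10.10.255.17) and
  # on vm03 (Primary of test2, holding 10.10.255.16):
  drbdadm --stacked create-md stest
  drbdadm --stacked up stest

  # on the side that holds the initial data, e.g. vm01:
  drbdadm --stacked -- --overwrite-data-of-peer primary stest

After that, /proc/drbd should show an additional minor (13) for stest on those
two nodes.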

Regards,
Andreas

-- 
Need help with DRBD?
http://www.hastexo.com/now

 
 Best regards
 Benjamin
 

 Regards,
 Andreas




___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Dual-primary to single node

2012-01-17 Thread Andreas Kurz
Hello,

On 01/13/2012 10:59 AM, Luis M. Carril wrote:
 Hello,
 
 I'm new to DRBD and I think that I have mixed up some concepts and
 policies.
 
 I have set up a two-node cluster (of virtual machines) with a shared
 volume in dual-primary mode with OCFS2, as a basic infrastructure for
 some testing.
 I need that when one of the two nodes goes down, the other continues
 working normally (we can assume that the other node will never recover
 again), but when one node fails
 the other enters the WFConnection state and the volume is disconnected.
 I have set up the standard set of policies for split brain:
 
 after-sb-0pri discard-zero-changes;
 after-sb-1pri discard-secondary;
 after-sb-2pri disconnect;
 
 
   Which policy should I use to achieve the desired behaviour (if one
 node fails, the other continues working alone)?

these policies only take effect if the two nodes see each other again
after a split-brain, and if you lose one node it is correct behaviour
that the remaining node has its DRBD resources in WFConnection state.

What do you mean by "the volume is disconnected"? How do you manage your
cluster? Pacemaker? rgmanager?

Without any further information on the rest of your setup and what you
think is not working correctly, it's impossible to comment further ...

Regards,
Andreas

-- 
Need help with DRBD?
http://www.hastexo.com/now
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] 4-way replication

2012-01-17 Thread Jake Smith
What Andreas said :-)

- Original Message -
 From: Andreas Kurz andr...@hastexo.com
 To: drbd-user@lists.linbit.com
 Sent: Tuesday, January 17, 2012 10:33:35 AM
 Subject: Re: [DRBD-user] 4-way replication
 
 Hello,
 
 On 01/17/2012 04:22 PM, Benjamin Knoth wrote:
  Hello,
  
  Am 17.01.2012 15:36, schrieb Andreas Kurz:
  On 01/17/2012 03:02 PM, Benjamin Knoth wrote:
  Hello Andreas,
 
  Am 17.01.2012 14:51, schrieb Andreas Kurz:
  Hello,
 
 
  Use the same, complete config on all nodes.
  I copied this config on all nodes.
 
  And still not working? Can you provide or pastebin drbdadm dump
  all
  and cat /proc/drbd from a node that gives you that error?
  
  on vm01 and vm02 i get for resource test on cat /proc/drbd. The not
  stacked resource works
  
   3: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-
  ns:1077644 nr:0 dw:33232 dr:1044968 al:13 bm:63 lo:0 pe:0 ua:0
  ap:0
  ep:1 wo:b oos:0
  
  After i copied the config with resource stest to all 4 nodes i get
  the
  following on vm01 and vm02.
  
  drbdadm dump all
  
  drbd.d/stest.res:1: in resource stest, referenced resource 'test2'
  not
  defined.
  
  And cat /proc/drbd display only the unstacked test resource
  
3: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C
r-
  ns:0 nr:0 dw:0 dr:528 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b
  oos:0
  
  On vm03 and vm04 i can't also find a stacked resource in /proc/drbd
  
   3: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-
  ns:0 nr:0 dw:0 dr:536 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b
  oos:0
  
  drbdadm dump all
  drbd.d/stest.res:1: in resource stest, referenced resource 'test'
  not
  defined.
  
  You see that on the referenced resource are different between
  vm01-02
  and vm03-04. On the example the unstacked resources had also
  different
  names. In this part DRBD need to know that the referenced resource
  test
  is also available on vm01-02 and test2 is only available on
  vm03-04.
  That is the problem what i need to solve or not?
 
 Yes ... put _all_ resource configs on _all_ nodes (and include them
 in
 your config of course): the same config on all four nodes
 
 Regards,
 Andreas
 
 --
 Need help with DRBD?
 http://www.hastexo.com/now
 
  
  Best regards
  Benjamin
  
 
  Regards,
  Andreas
 
 
 
 
 
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Dual-primary to single node

2012-01-17 Thread Digimer
On 01/13/2012 04:59 AM, Luis M. Carril wrote:
 Hello,
 
I´m new to DRBD and I think that I have a mess with some concepts and
 policies.

Welcome! DRBD is a bit different from many storage concepts, so it takes
a bit to wrap your head around. However, be careful not to overthink
things... It's fundamentally quite straightforward.

I have setup a two node cluster (of virtual machines) with a shared
 volume in dual primary mode with ocfs2 as a basic infrastructure for
 some testings.

Do you have fencing? Dual-primary cannot operate safely without a
mechanism for ensuring the state of the remote node.

I need that when one of the two nodes goes down the other continues
 working normally (we can assume that the other node never will recover
 again), but when one node fails

That the other node will never return is not something DRBD can assume.
This is where fencing comes in... When a node loses contact with its
peer, it has no way of knowing what state the remote node is in. Is it
still running, but thinks the local peer is gone? Is the silent node
hung, but might it return at some point? Is the remote node powered off?

The only thing you know is what you don't know.

Consider:

Both nodes, had they simply assumed silence == death, go StandAlone
and Primary. During this time, data is written to either node but that
data is not replicated. Now you have divergent data and the only
mechanism to recover is to invalidate the changes on one of the nodes.
Data loss.

The solution is fencing and resource management, which is what Andreas
meant when he asked about pacemaker vs. rgmanager.
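
In DRBD terms that usually means a fencing policy plus a fence-peer handler in
the resource config. A sketch for a Pacemaker-managed cluster, using the
handler scripts shipped with DRBD 8.3 (an rgmanager setup would use a
different fence-peer script):

  disk {
    fencing resource-and-stonith;
  }
  handlers {
    fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
  }

With resource-and-stonith, DRBD suspends IO when it loses the peer until the
handler reports that the other node has been dealt with, which avoids the
divergent-data scenario described above.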

the other enters the WFConnection state and the volume is disconnected.
 I have set up the standard set of policies for split brain:
 
 after-sb-0pri discard-zero-changes;
 after-sb-1pri discard-secondary;
 after-sb-2pri disconnect;
 
   Which policy should I use to achieve the desired behaviour (if one
 node fails, the other continues working alone)?
 
 Regards

Again, as Andreas indicated, this controls the policy when comms are
lost (be it because of a network error, the peer dying/hanging, whatever).
It is by design that a node, after losing its peer, goes into
WFConnection (waiting for connection). In this state, if/when the peer
recovers (as it often does with power fencing), the peer can
re-establish the connection, sync changes and return to a normal
operating state.
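
If a real split brain does occur despite these policies, manual recovery is
roughly the standard procedure from the users guide (the resource name is a
placeholder; on a dual-primary setup the victim must of course stop using the
device first):

  # on the node whose changes will be discarded (the split-brain "victim"):
  drbdadm secondary <resource>
  drbdadm -- --discard-my-data connect <resource>

  # on the surviving node, if it is StandAlone:
  drbdadm connect <resource>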

-- 
Digimer
E-Mail:  digi...@alteeve.com
Freenode handle: digimer
Papers and Projects: http://alteeve.com
Node Assassin:   http://nodeassassin.org
omg my singularity battery is dead again.
stupid hawking radiation. - epitron
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Dual-primary to single node

2012-01-17 Thread CAMPBELL Robert
Luis,

I have experienced the same problems; what helped was to 'fence' the
other node by forcing it into a reboot. I don't quite know why it worked
(worrying), but I found that if I fenced the other node, I was not
getting any more time-outs on the DRBD block device, which I think is
what you describe.

I am using HP servers with iLO3 and RHEL/CentOS, so I used the 
fence_ipmi script included with the CentOS 6.1 distribution (do not 
forget to attach your fence devices to the correct nodes).
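
For reference, a manual test of that agent (fence_ipmilan in the stock
fence-agents package) from the shell might look something like this; the
address, credentials and the lanplus flag for iLO3 are placeholders, and in
normal operation the cluster stack calls the agent, not the admin:

  fence_ipmilan -a 10.0.0.50 -l fenceuser -p secret -P -o reboot

If that reliably power-cycles the peer, the same parameters can then be wired
into the cluster's fence/stonith configuration.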

Hope it helps!

Robert

On 13-1-2012 10:59, Luis M. Carril wrote:
 Hello,

I´m new to DRBD and I think that I have a mess with some concepts 
 and policies.

I have setup a two node cluster (of virtual machines) with a shared 
 volume in dual primary mode with ocfs2 as a basic infrastructure for 
 some testings.
I need that when one of the two nodes goes down the other continues 
 working normally (we can assume that the other node never will recover 
 again), but when one node fails
the other enter in WFConnection state and the volume is 
 disconnected, I have setup the standar set of policies for split brain:

 after-sb-0pri discard-zero-changes;
 after-sb-1pri discard-secondary;
 after-sb-2pri disconnect;


   Which policy should I use to achieve the desired behaivour (if one 
 node fails, the other continue working alone)?

 Regards






___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] 4-way replication

2012-01-17 Thread Benjamin Knoth
Thanks a lot, Andreas and Jake.
That was my problem: all servers need the stacked config and both
unstacked configs.

Now I will create the Pacemaker configuration, and then everything should
be OK.
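
For reference, a sketch of what that Pacemaker configuration could look like
for one half of the setup (the vm01/vm02 cluster), adapted from the
stacked-resources example in the users guide linked earlier in this thread.
The stacked service IP 10.10.255.17 and the DRBD resources test/stest follow
the configs above; the primitive names, the NIC and everything else are
illustrative and untested:

  primitive p_drbd_test ocf:linbit:drbd \
          params drbd_resource="test"
  ms ms_drbd_test p_drbd_test \
          meta master-max="1" master-node-max="1" \
          clone-max="2" clone-node-max="1" notify="true"
  primitive p_drbd_stest ocf:linbit:drbd \
          params drbd_resource="stest"
  ms ms_drbd_stest p_drbd_stest \
          meta master-max="1" clone-max="1" clone-node-max="1" \
          master-node-max="1" notify="true" globally-unique="false"
  primitive p_ip_stacked ocf:heartbeat:IPaddr2 \
          params ip="10.10.255.17" nic="eth0"
  colocation c_ip_on_test_master inf: p_ip_stacked ms_drbd_test:Master
  colocation c_stest_on_ip inf: ms_drbd_stest p_ip_stacked
  order o_ip_before_stest inf: p_ip_stacked ms_drbd_stest:start
  order o_test_before_stest inf: ms_drbd_test:promote ms_drbd_stest:start

The vm03/vm04 side would get the analogous configuration for test2, stest and
10.10.255.16.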

Best regards

Benjamin

Am 17.01.2012 16:33, schrieb Andreas Kurz:
 Hello,
 
 On 01/17/2012 04:22 PM, Benjamin Knoth wrote:
 Hello,

 Am 17.01.2012 15:36, schrieb Andreas Kurz:
 On 01/17/2012 03:02 PM, Benjamin Knoth wrote:
 Hello Andreas,

 Am 17.01.2012 14:51, schrieb Andreas Kurz:
 Hello,


 Use the same, complete config on all nodes.
 I copied this config on all nodes.

 And still not working? Can you provide or pastebin drbdadm dump all
 and cat /proc/drbd from a node that gives you that error?

 on vm01 and vm02 i get for resource test on cat /proc/drbd. The not
 stacked resource works

  3: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-
 ns:1077644 nr:0 dw:33232 dr:1044968 al:13 bm:63 lo:0 pe:0 ua:0 ap:0
 ep:1 wo:b oos:0

 After i copied the config with resource stest to all 4 nodes i get the
 following on vm01 and vm02.

 drbdadm dump all

 drbd.d/stest.res:1: in resource stest, referenced resource 'test2' not
 defined.

 And cat /proc/drbd display only the unstacked test resource

   3: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-
 ns:0 nr:0 dw:0 dr:528 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

 On vm03 and vm04 i can't also find a stacked resource in /proc/drbd

  3: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-
 ns:0 nr:0 dw:0 dr:536 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

 drbdadm dump all
 drbd.d/stest.res:1: in resource stest, referenced resource 'test' not
 defined.

 You see that on the referenced resource are different between vm01-02
 and vm03-04. On the example the unstacked resources had also different
 names. In this part DRBD need to know that the referenced resource test
 is also available on vm01-02 and test2 is only available on vm03-04.
 That is the problem what i need to solve or not?
 
 Yes ... put _all_ resource configs on _all_ nodes (and include them in
 your config of course): the same config on all four nodes
 
 Regards,
 Andreas
 
 
 
 
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Dual-primary to single node

2012-01-17 Thread Digimer
On 01/17/2012 11:07 AM, CAMPBELL Robert wrote:
 Luis,
 
 I have experienced the same problems, what helped was to 'fence' the 
 other node by forcing it into a reboot. I don't quite know why it worked 
 (worrying) but I found that if I fenced the other node, I was not 
 getting any more time-outs on the drbd block device, which I think is 
 what you describe.

https://alteeve.com/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Concept.3B_Fencing

:)

 I am using HP servers with iLO3 and RHEL/CentOS, so I used the 
 fence_ipmi script included with the CentOS 6.1 distribution (do not 
 forget to attach your fence devices to the correct nodes).
 
 Hope it helps!

One thing that Luis hasn't confirmed yet is what cluster stack, if any,
s/he is using with DRBD. fence_ipmilan works with both rgmanager (the
supported resource manager under EL6) and pacemaker (the
future-but-currently-tech-preview resource manager).

From DRBD's perspective, what matters is the fence-handler section. I
cover all of this in a KVM tutorial; you can ignore the KVM part, and
hopefully the rest might help clear up some things.

https://alteeve.com/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Installing_DRBD

https://alteeve.com/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Configuring_DRBD

-- 
Digimer
E-Mail:  digi...@alteeve.com
Freenode handle: digimer
Papers and Projects: http://alteeve.com
Node Assassin:   http://nodeassassin.org
omg my singularity battery is dead again.
stupid hawking radiation. - epitron
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Dual-primary to single node

2012-01-17 Thread Digimer
On 01/17/2012 12:32 PM, Luis M. Carril wrote:
 Hello,
 
  OK, so the fencing and split-brain mechanisms only come into play when
 both nodes meet again after some failure.
  So... while the nodes can't connect to their peer, do they disallow
 IO to the volume?
 
 Regards

No, if both nodes go StandAlone and Primary, both will allow access to
the underlying storage, which results in a split brain. Fencing kills
one of the nodes (either the defective one or the slower one), preventing
it from changing its underlying storage.

PS - Please reply to the list. These discussions help others later when
they are in the archives. :)

-- 
Digimer
E-Mail:  digi...@alteeve.com
Freenode handle: digimer
Papers and Projects: http://alteeve.com
Node Assassin:   http://nodeassassin.org
omg my singularity battery is dead again.
stupid hawking radiation. - epitron
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Dual-primary to single node

2012-01-17 Thread Luis M. Carril


El 17/01/2012 18:56, Digimer escribió:

On 01/17/2012 12:32 PM, Luis M. Carril wrote:

Hello,

  Ok, the fencing and splitbrain mechanisms only enter to play when
both nodes meet again after some failure.
  So... meanwhile the nodes doesn´t connect their peer they disallow
IO to the volume?

Regards

No, if both nodes go Standalone and Primary, both will allow access to
the underlying storage, which results in a split brain. Fencing kills
one of the nodes (either the defective one or the slower one) preventing
it from changing it's underlying storage.
Umph, but actually I'm testing dropping one node while it is writing
to the volume, and the volume on the surviving node is stalled
(drbd-overview freezes, but /proc/drbd shows
that it is WFConnection, Primary and UpToDate); even if I run drbdadm
disconnect manually to make it go StandAlone, IO operations freeze on
the directory.


Maybe it is an issue related to OCFS2...

Well my configurations are:

In DRBD

global {
  usage-count no;
}

common {
  protocol C;
  meta-disk internal;

  startup {
    wfc-timeout       300;
    degr-wfc-timeout  120;    # 2 minutes.
    become-primary-on both;
  }

  syncer {
    rate 10M;
  }

  disk {
    on-io-error detach;
  }
}

resource r0 {
  startup {
    become-primary-on both;
  }

  net {
    allow-two-primaries;
    after-sb-0pri disconnect;
    after-sb-1pri disconnect;
    after-sb-2pri disconnect;
  }

  on master {
    device    /dev/drbd1;
    disk      /dev/xvde;
    address   10.0.0.2:7789;
    meta-disk internal;
  }
  on shadow {
    device    /dev/drbd1;
    disk      /dev/xvde;
    address   10.0.0.3:7789;
    meta-disk internal;
  }
}

In OCFS2:
cluster:
    node_count = 2
    name = ocfs2
node:
    ip_port = 
    ip_address = 10.0.0.2
    number = 0
    name = master
    cluster = ocfs2
node:
    ip_port = 
    ip_address = 10.0.0.3
    number = 1
    name = shadow
    cluster = ocfs2

In debconf:
ocfs2-tools ocfs2-tools/idle_timeout  select 3
ocfs2-tools ocfs2-tools/reconnect_delay select 2000
ocfs2-tools ocfs2-tools/init select true
ocfs2-tools ocfs2-tools/clustername select ocfs2
ocfs2-tools ocfs2-tools/heartbeat_threshold select 31
ocfs2-tools ocfs2-tools/keepalive_delay select 2000






PS - Please reply to the list. These discussions help others later when
they are in the archives. :)


Sorry, my fault!

And thanks to all!
Regards

--
Luis M. Carril
Project Technician
Galicia Supercomputing Center (CESGA)
Avda. de Vigo s/n
15706 Santiago de Compostela
SPAIN

Tel: 34-981569810 ext 249
lmcar...@cesga.es
www.cesga.es


==

___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Dual-primary to single node

2012-01-17 Thread Digimer
On 01/17/2012 01:09 PM, Luis M. Carril wrote:
 
 El 17/01/2012 18:56, Digimer escribió:
 On 01/17/2012 12:32 PM, Luis M. Carril wrote:
 Hello,

   Ok, the fencing and splitbrain mechanisms only enter to play when
 both nodes meet again after some failure.
   So... meanwhile the nodes doesn´t connect their peer they disallow
 IO to the volume?

 Regards
 No, if both nodes go Standalone and Primary, both will allow access to
 the underlying storage, which results in a split brain. Fencing kills
 one of the nodes (either the defective one or the slower one) preventing
 it from changing it's underlying storage.
 Umph, but actually I'm testing to drop one node meanwhile it is writing
 in the volume, and the volume in the surviving node is stalled
 (drbd-overview freezes, but /proc/drbd shows
 that it is WTFConnection, Primary and Uptodate), even if I make drbdadm
 disconnect manually to make it go StandAlone, IO operations freeze on
 the directory.
 
 Maybe is an issue related to OCFS...

Possibly; I use GFS2, not OCFS2, so I can't speak to its behaviour.

I can say though that GFS2 will also block when a node in the cluster
disappears, and it will remain blocked until it gets confirmation that
the lost node was fenced. This is by design, as a hung cluster is better
than a corrupted one. :)

-- 
Digimer
E-Mail:  digi...@alteeve.com
Freenode handle: digimer
Papers and Projects: http://alteeve.com
Node Assassin:   http://nodeassassin.org
omg my singularity battery is dead again.
stupid hawking radiation. - epitron
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user