[DRBD-user] Combining different versions (was: Unterschiedliche Versionen kombinieren?)

2011-07-29 Thread Daniel Meszaros

Hi!

Lars Ellenberg:

Digimer:

Daniel Meszaros:

Hi!

How smart or stupid is it to combine an 8.3.11rc1-0 on an older system
(the future Primary) with a freshly installed 8.3.11-0 (the future
Secondary)? Background: there is no backup of the data on the DRBD
device running under the older version, and without a backup I don't
want to update. :-/

CU,
Mészi.

Sorry, but please speak English on this list. The people here can't
speak German or Japanese. ;)

@Digimer: Ooops. Sorry. Forgot that this list is international. :-D

My question was whether an 8.3.11rc1-0 on an old system (which shall become
the new Primary) can be combined with a freshly installed 8.3.11-0 (the
future Secondary). I could update the old system first, but I am a bit
hesitant because there is no backup of what's on its DRBD device.



Very much so, essentially ;-)
Or simply Ja..
So I can combine both? I'd like to update the old one as soon as I know
both DRBD nodes are in sync.



Besides, you should use 8.3.11-3, not -0
http://git.drbd.org/?p=drbd-8.3.git;a=shortlog;h=refs/heads/drbd-8.3.11-y

Probably a good idea to just upgrade both nodes to 8.3.11-3 at the same time.

But they should be in sync before the upgrade, right?
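
To be safe I will check from the shell that both sides report a clean
sync before I touch any packages; something like this (r0 is just a
stand-in for my resource name):

# drbdadm cstate r0
Connected
# drbdadm dstate r0
UpToDate/UpToDate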

CU,
Mészi.



Re: [DRBD-user] Recover primary node from disk failure.

2011-07-29 Thread Digimer
On 07/29/2011 07:46 AM, Caspar Smit wrote:
 Hi all,
 
 My primary node suffered a raid (md) failure and the drbd state is now:
 
 cs:Connected ro:Primary/Secondary ds:Diskless/UpToDate C r
 ns:76247848 nr:316725148 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0
 ep:1 wo:b oos:0
 
 So DRBD is passing all IO to the second node.
 
 In the meantime I fixed the md, and now I'm ready to attach it again.
 
 But here's my question:
 
 Do I NEED a failover so the secondary node becomes Primary/UpToDate, or
 can DRBD handle the fact that when I attach the md again the primary
 becomes SyncTarget/Inconsistent? Will it still pass all IO to the
 secondary node until the primary is UpToDate again?
 
 In short:
 
 Will drbdadm attach resource be sufficient to recover from
 Primary/Diskless, or do I need to do a failover first?
 
 Kind regards,

Simply reconnecting should be fine.

As I understand Protocol C, writes will go to both nodes, and reads will
come from the UpToDate node. The details are probably best answered by a
dev or Linbit person directly though, as I am somewhat guessing. I am
sure you don't need to fail-over though.

You may *want* to fail over though, if you can do so without downtime
(i.e., live-migrate a VM, if that is what is using the DRBD storage). The
reason being that if anything happens to the communication with the
secondary, the primary will drop to Secondary until its local storage
is UpToDate.
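
Untested on my part, but I'd expect the re-attach itself to look roughly
like this (assuming your resource is called r0):

# drbdadm attach r0
# cat /proc/drbd

While it catches up you should see cs:SyncTarget ds:Inconsistent/UpToDate
on the primary, and once the resync finishes it should go back to
UpToDate/UpToDate, with IO being served the whole time.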

-- 
Digimer
E-Mail:  digi...@alteeve.com
Freenode handle: digimer
Papers and Projects: http://alteeve.com
Node Assassin:   http://nodeassassin.org
At what point did we forget that the Space Shuttle was, essentially,
a program that strapped human beings to an explosion and tried to stab
through the sky with fire and math?


Re: [DRBD-user] rolling update from 8.3.11 to 8.4.0

2011-07-29 Thread Zev Weiss


On Jul 29, 2011, at 3:41 AM, Junko IKEDA wrote:


Hi,

I got it.
drbdadm wipe-md + create-md are needed.

0) start DRBD on both nodes, as Primary/Secondary

on Secondary;
1) stop DRBD
# drbdadm down all
# service drbd stop

2) remove old DRBD
# rpm -e drbd-km-2.6.18_238.el5-8.3.11-1 drbd-pacemaker-8.3.11-1 \
    drbd-utils-8.3.11-1

3) install new DRBD
# rpm -ihv drbd-utils-8.4.0-1.x86_64.rpm \
    drbd-km-2.6.18_238.el5-8.4.0-1.x86_64.rpm \
    drbd-pacemaker-8.4.0-1.x86_64.rpm
# chkconfig drbd off

4) create drbd.conf
# cp -p /etc/drbd.conf.rpmsave /etc/drbd.conf

5) refresh meta-data
# drbdadm wipe-md all
# drbdadm create-md all

6) start DRBD
# service drbd start

# cat /proc/drbd
version: 8.4.0 (api:1/proto:86-100)
GIT-hash: 28753f559ab51b549d16bcf487fe625d5919c49c build by root@bl460g1n13, 2011-07-25 13:46:08
0: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate C r-
   ns:0 nr:0 dw:0 dr:263296 al:0 bm:15 lo:22 pe:46 ua:0 ap:0 ep:1 wo:b oos:9513840
   [] sync'ed:  2.7% (9288/9540)M
   finish: 0:03:41 speed: 42,900 (42,900) want: 40,960 K/sec

# cat /proc/drbd
version: 8.4.0 (api:1/proto:86-100)
GIT-hash: 28753f559ab51b549d16bcf487fe625d5919c49c build by root@bl460g1n13, 2011-07-25 13:46:08
0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-
   ns:0 nr:0 dw:0 dr:9771248 al:0 bm:597 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
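
For the rolling part I simply wait until the resync is finished before
switching roles and repeating steps 1)-6) on the other node; roughly like
this (untested one-liner, assumes the single resource shown above):

# while ! grep -q 'ds:UpToDate/UpToDate' /proc/drbd; do sleep 10; done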


Thanks,
Junko



Hi,

I'm attempting a rolling update from 8.3.7 to 8.4.0 (hoping it solves
my write performance problems), and running into the same
'drbd_md_sync_page_io(,...s,WRITE) failed!' problem.  I have two nodes
in primary/secondary with DRBD on top of LVM on top of md RAID10, and
I get this on all of my DRBD resources.  I'm attempting to update the
secondary; the primary remained active and in use on 8.3.7 during this.


I had initially tried it without the metadata wipe/re-create -- after
seeing this message I retried including those steps, but it didn't
change anything.  After the first attempt (without the wipe-md/create-md)
I rolled back to 8.3.7, and when I re-attached my resources it appeared
that the 8.4.0 upgrade attempt had somehow nuked the metadata anyway,
since 8.3.7 couldn't find a magic number, so I had to do a full re-sync.
Now that I've made another attempt and gotten the same results, it looks
like I'm going to have to do that again.
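
Before the next attempt I'm also going to keep a plain-text copy of the
metadata, roughly like this on the secondary while each resource is down
(r0 standing in for the real resource names):

# drbdadm down r0
# drbdadm dump-md r0 > /root/r0-md-backup.txt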


Anyone else seen this, or have any other advice?

Thanks,
Zev

___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user