Hi,

I have a cluster in an Active/Passive configuration. Currently I am trying to 
handle the situation when I/O errors occur. I noticed that in drbd.conf the 
default behaviour is to halt the node with the failed disk. This is a little 
bit brutal for me. What other scenarios are taken into account here, if any? 
I was considering only disconnecting replication for the resources where I/O 
errors occurred and then promoting the second node, but as far as I know this 
is not possible when the resource is in the Diskless state (btw. why?):

drbdadm disconnect myresource

0: State change failed: (-2) Refusing to be Primary without at least one 
UpToDate disk
Command 'drbdsetup 0 disconnect' terminated with exit code 17
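
For context, this is roughly the disk section I have been experimenting with 
(just a sketch assuming DRBD 8.x syntax; the comments reflect my reading of 
the docs, so please correct me if I got the semantics wrong):

    resource myresource {
      disk {
        # alternatives I found documented:
        #   pass_on             - report the I/O error to the upper layers
        #   call-local-io-error - run the local-io-error handler (the sample
        #                         config uses this to halt the node)
        on-io-error detach;   # drop the failed backing device, go Diskless,
                              # and keep serving I/O through the peer
      }
    }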


Best regards
M

_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user