That could be it.  Is GFS2 stable enough to use reliably?  It is 
currently listed as Experimental.
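
If we do end up on GFS2, my understanding is that creating it on top of the
dual-primary DRBD device goes roughly like this (a sketch only; it assumes
the cman/dlm cluster stack is already up, and the cluster name "hacluster"
is made up):

  # run once, on one node only: dlm locking, one journal per node
  mkfs.gfs2 -p lock_dlm -t hacluster:data -j 2 /dev/drbd0

  # then mount it on both nodes
  mount -t gfs2 /dev/drbd0 /data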


On 8/4/09 9:47 AM, Gianluca Cecchi wrote:
> What is the file system on /data?
> In dual primary it has to be GFS or GFS2.
>
>
> On Tue, Aug 4, 2009 at 5:38 PM, Robert L.
> Harris<[email protected]>  wrote:
>    
>> Hmm, I added the split-brain section; I missed that when I went through
>> it the first time.  I did follow the install instructions and it looked
>> good, unless I'm missing something stupid.  I just rebooted both
>> machines and it looks MUCH happier.  Now both hosts report this:
>>
>>
>> rob...@grandpa:/$ cat /proc/drbd
>> version: 8.3.0 (api:88/proto:86-89)
>> GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by
>> iv...@ubuntu, 2009-01-17 07:49:56
>>   0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r---
>>      ns:92 nr:12292 dw:12384 dr:249 al:4 bm:2 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
>>
>> r...@grandma:~# cat /proc/drbd
>> version: 8.3.0 (api:88/proto:86-89)
>> GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by
>> iv...@ubuntu, 2009-01-17 07:49:56
>>   0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r---
>>      ns:12292 nr:92 dw:96 dr:12505 al:1 bm:4 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
>>
>> Unfortunately I then tried some chgrp and cp operations.  On grandpa I
>> did a "chgrp -R users /data" and copied /etc/fstab into it.  It looks
>> good there, but none of the changes propagated to grandma.
>>
>> It has been sitting for about 30 minutes with no change.
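
That symptom is exactly what a non-cluster file system on /data would give:
DRBD replicates the blocks underneath, but each node's kernel caches the
file system privately, so changes made on one node never become visible on
the other (and a non-cluster file system mounted read-write on two nodes
will eventually be corrupted).  Worth checking what is actually mounted
there, e.g.:

  df -T /data
  mount | grep /data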
>>
>> Robert
>>
>>
>>
>> On 8/4/09 8:23 AM, Michael Schwartzkopff wrote:
>>
>>> On Tuesday, 4 August 2009 16:09:59, Robert L. Harris wrote:
>>>
>>>> I am trying to get a two-node DRBD setup running.  I thought I had it:
>>>> I was able to drop a file on the first node, grandpa, and it was
>>>> visible on both.  A power cycle later there is no two-way syncing, and
>>>> at boot each node says it is waiting for the other node to come up and
>>>> will wait forever.
>>>>
>>>> Here's what I have currently:
>>>> r...@grandpa:/etc/rc2.d# cat /proc/drbd
>>>> version: 8.3.0 (api:88/proto:86-89)
>>>> GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by
>>>> iv...@ubuntu, 2009-01-17 07:49:56
>>>>     0: cs:StandAlone ro:Primary/Unknown ds:UpToDate/DUnknown   r---
>>>>        ns:792080589 nr:84 dw:204 dr:792080718 al:6 bm:48345 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
>>>>
>>>> r...@grandma:~# cat /proc/drbd
>>>> version: 8.3.0 (api:88/proto:86-89)
>>>> GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by
>>>> iv...@ubuntu, 2009-01-17 07:49:56
>>>>     0: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r---
>>>>        ns:0 nr:0 dw:0 dr:0 al:0 bm:2 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:12288
>>>>
>>>>
>>>> r...@grandpa:/etc/rc2.d# cat /etc/drbd.conf
>>>> common {
>>>>   syncer { rate 60M; }
>>>> }
>>>>
>>>> resource r0 {
>>>>
>>>>   protocol C;
>>>>
>>>>   handlers {
>>>>     pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
>>>>   }
>>>>
>>>>   net {
>>>>     allow-two-primaries;
>>>>   }
>>>>
>>>>   startup {
>>>>     become-primary-on both;
>>>>   }
>>>>
>>>>   on grandpa {                 # ** EDIT ** the hostname of server 1
>>>>     device    /dev/drbd0;
>>>>     disk      /dev/mapper/isw_dfeijegdcg_Volume04;      # ** EDIT ** data partition on server 1
>>>>     address   192.168.0.20:7788;                        # ** EDIT ** IP address on server 1
>>>>     meta-disk /dev/mapper/isw_dfeijegdcg_Volume03[0];   # ** EDIT ** 128MB partition for DRBD on server 1
>>>>   }
>>>>
>>>>   on grandma {                 # ** EDIT ** the hostname of server 2
>>>>     device    /dev/drbd0;
>>>>     disk      /dev/mapper/isw_eafchajabg_Volume04;      # ** EDIT ** data partition on server 2
>>>>     address   192.168.0.21:7788;                        # ** EDIT ** IP address on server 2
>>>>     meta-disk /dev/mapper/isw_eafchajabg_Volume03[0];   # ** EDIT ** 128MB partition for DRBD on server 2
>>>>   }
>>>> }
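
As an aside, the "waits forever for the other node" behaviour at boot is
DRBD's default wfc-timeout of 0, i.e. wait indefinitely.  The startup
section can bound that wait; a sketch, with the timeout values picked
arbitrarily:

  startup {
    become-primary-on both;
    wfc-timeout      120;   # stop waiting for the peer after 120 seconds
    degr-wfc-timeout  60;   # shorter wait if the cluster was already degraded
  }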
>>>>
>>>>
>>>> Anyone see anything or have any ideas?  I need to have this live Monday.
>>>>
>>>> Robert
>>>>
>>>
>>> That is a split brain.  For recovery, see:
>>> http://www.drbd.org/docs/working/
>>> chapter "Manual split brain recovery".
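
With DRBD 8.3 the manual procedure amounts to picking one node as the split
brain victim and discarding its changes; roughly, using the resource name
r0 from the config above:

  # on the victim, whose modifications will be thrown away
  drbdadm secondary r0
  drbdadm -- --discard-my-data connect r0

  # on the survivor, only needed if it is StandAlone
  drbdadm connect r0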
>>>
>>> I have seen this often during cluster setup, but it should not happen
>>> during normal operation afterwards, when nobody is playing around with
>>> the cluster.
>>>
>>> Also be sure to read
>>> http://www.drbd.org/docs/install/
>>> section "Automatic split brain recovery policies"
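
Those policies go in the net section of drbd.conf.  A sketch of what such a
stanza can look like; the actual policy values have to be chosen per the
documentation, and with allow-two-primaries the usable after-sb-2pri
choices are quite limited:

  net {
    allow-two-primaries;
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }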
>>>
>>> Did you set everything up accordingly?
>>>
>>> Greetings,
>>>

-- 

:wq!
====================================================================
Robert L. Harris                     | GPG Key ID: E344DA3B
                                          @ x-hkp://pgp.mit.edu
DISCLAIMER:
       These are MY OPINIONS             With Dreams To Be A King,
        ALONE.  I speak for              First One Should Be A Man
        no-one else.                       - Manowar


_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
