hi andre,

thanks for your input. I checked, and I'm using the 1.4.4 package from lenny-backports.
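
For reference, here's roughly how I verified it (a minimal sketch; the version string is a placeholder for what `dpkg-query -W -f='${Version}' ocfs2-tools` returns on a real box):

```shell
# Sketch: verify the installed ocfs2-tools is at least 1.4.4, the version
# said in this thread to fix the orphan-file bug.
# Placeholder value; on Debian it would come from:
#   installed=$(dpkg-query -W -f='${Version}' ocfs2-tools)
installed="1.4.4-1~bpo50+1"
required="1.4.4"

# sort -V orders version strings; if the required version sorts first
# (or equal), the installed one is new enough.
lowest=$(printf '%s\n' "$required" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
    echo "ocfs2-tools is new enough"
else
    echo "ocfs2-tools is older than $required -- orphan-file bug risk"
fi
```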

Thanks!

Oliver

On Thu, Dec 9, 2010 at 8:27 PM, andrelst <amva...@gmail.com> wrote:

> OCFS2+DRBD uses your NIC for transport in this strategy. Better to go with
> OCFS2 (NIC transport) + multipathing (Fibre Channel transport backend) for
> less traffic on the NIC.
>
> FYI, stock Debian, Ubuntu and SuSE (and yes, even the latest SLES 11 SP1
> HA addon) ship OCFS2 1.4.3. Watch out for those, because you may be hit by
> the "orphan file" bug, which I found out about the painful way. Better to
> use 1.4.4 or higher, which fixes the problem. It's a chicken-and-egg
> situation: the Oracle project website only has RHEL binaries, so you have
> to go back to the compile-from-scratch Linux of the '90s.... :)
>
> regards,
> Andre | http://www.varon.ca
>
> On Thu, Dec 9, 2010 at 10:20 PM, Linux Cook <linuxc...@gmail.com> wrote:
>
>> hi jan,
>>
>> I need to mount it on both machines because I'm inserting data from both,
>> since I'm working on an active-active web cluster. Yeah, I realized that
>> since it cannot be mounted on both machines, I used an active-active DRBD
>> setup instead and mount it using Pacemaker.
>> So it's the OCFS2 + DRBD thingy, which worked for me.
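>>
>> Roughly, the Pacemaker side of that looks like the following crm configure
>> sketch (resource, device and mountpoint names are made up for illustration,
>> not my exact config):

```
# Dual-primary DRBD device promoted to master on both nodes, with the
# OCFS2 filesystem mounted on top of it on both nodes.
primitive p_drbd_r0 ocf:linbit:drbd \
    params drbd_resource="r0" \
    op monitor interval="20" role="Master"
ms ms_drbd_r0 p_drbd_r0 \
    meta master-max="2" clone-max="2" notify="true"
primitive p_fs_web ocf:heartbeat:Filesystem \
    params device="/dev/drbd0" directory="/var/www" fstype="ocfs2" \
    op monitor interval="30"
clone cl_fs_web p_fs_web meta interleave="true"
# Mount only where DRBD is primary, and only after it has been promoted.
colocation col_fs_with_drbd inf: cl_fs_web ms_drbd_r0:Master
order ord_drbd_before_fs inf: ms_drbd_r0:promote cl_fs_web:start
```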
>>
>> Thanks for all your inputs guys!
>> I really appreciate it.
>>
>> Oliver
>>
>>
>> On Thu, Dec 9, 2010 at 10:37 PM, jan gestre <plugger.l...@gmail.com> wrote:
>>
>>>
>>>
>>> On Wed, Dec 8, 2010 at 3:09 PM, Linux Cook <linuxc...@gmail.com> wrote:
>>>
>>>> hi guys,
>>>>
>>>> I bumped into a problem after setting up OCFS2. I'm trying to mount the
>>>> OCFS2 filesystem on both nodes by adding it to /etc/fstab, but only the
>>>> primary node gets the mount. The secondary node didn't mount anything.
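>>>>
>>>> For reference, the kind of fstab entry I have in mind looks roughly like
>>>> this (the device path and mountpoint are examples, not my real ones):

```
# /etc/fstab -- _netdev defers the mount until networking and the o2cb
# cluster stack are up; without it, a boot-time OCFS2 mount can silently
# fail on a node.
/dev/mapper/mpath0  /shared  ocfs2  _netdev,defaults  0 0
```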
>>>>
>>>> Any thoughts?
>>>>
>>>> Oliver
>>>>
>>>>
>>>> On Tue, Dec 7, 2010 at 12:01 PM, Linux Cook <linuxc...@gmail.com> wrote:
>>>>
>>>>> Guys,
>>>>>
>>>>> Thanks for all your inputs; I really, really appreciate it. As I
>>>>> mentioned, I used OCFS2 with multipathing, and that worked for me.
>>>>>
>>>>> Thanks again!
>>>>>
>>>>> Oliver
>>>>>
>>>>> On Tue, Dec 7, 2010 at 11:45 AM, Federico Sevilla III <j...@fs3.ph> wrote:
>>>>>
>>>>>> Hi Oliver,
>>>>>>
>>>>>> Assuming you know the risks involved with what you're trying to do,
>>>>>> then the missing piece is what is called a shared-disk file system.
>>>>>> You already mentioned OCFS2; another option would be GFS (Global File
>>>>>> System). I'm not sure whether btrfs and ZFS are shared-disk file
>>>>>> systems, but it's worth a check.
>>>>>>
>>>>>> The reason "what you are doing is very dangerous" is that if you're
>>>>>> not using a shared-disk file system, you basically end up with lost
>>>>>> data at best, and more probably a corrupt and useless file system in
>>>>>> the end. "Normal" file systems expect exclusive write access to their
>>>>>> block device.
>>>>>>
>>>>>> Good luck, and have fun.
>>>>>>
>>>>>> Cheers!
>>>>>>
>>>>>> --
>>>>>> Federico Sevilla III, CISSP, CSM, LPIC-2
>>>>>> Chief Executive Officer
>>>>>> F S 3 Consulting Inc.
>>>>>> http://www.fs3.ph
>>>>>>
>>>>>>
>>>>>> On Tue, 2010-12-07 at 11:21 +0800, Linux Cook wrote:
>>>>>> > Okay, some guys told me I should be using OCFS2. Would this really
>>>>>> > help?
>>>>>> >
>>>>>> > On Tue, Dec 7, 2010 at 9:50 AM, Jimmy Lim <jimmyb...@gmail.com>
>>>>>> wrote:
>>>>>> >         Hi Oliver,
>>>>>> >
>>>>>> >
>>>>>> >         What you are doing is very dangerous!  You can present the
>>>>>> >         LUNs to the 2 servers, but only *one* can write to them.
>>>>>> >
>>>>>> >
>>>>>> >         If you want to achieve redundancy on your servers, I believe
>>>>>> >         it is better to get HP Serviceguard (but it is not free
>>>>>> >         software).
>>>>>> >
>>>>>> >
>>>>>> >         http://docs.hp.com/en/ha.html
>>>>>> >
>>>>>> >
>>>>>> >         HTH
>>>>>> >
>>>>>> >
>>>>>> >         Jimmy
>>>>>> >
>>>>>> >
>>>>>> >         On Tue, Dec 7, 2010 at 1:34 AM, Linux Cook
>>>>>> >         <linuxc...@gmail.com> wrote:
>>>>>> >
>>>>>> >
>>>>>> >                 Hi pluggers,
>>>>>> >
>>>>>> >                 I've just configured multipathing on my Debian boxes
>>>>>> >                 (Server A and Server B) using HP StorageWorks, with
>>>>>> >                 dual FCs on each server, and can now mount the path
>>>>>> >                 alias I defined in my multipath configuration. But
>>>>>> >                 every time I write data on Server A, the data is not
>>>>>> >                 reflected on Server B.
>>>>>> >
>>>>>> >                 Any help?
>>>>>> >
>>>>>> >                 Oliver
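>>>>>> >
>>>>>> >                 The alias stanza in question looks roughly like this
>>>>>> >                 (the WWID is an example, not my real one):

```
# /etc/multipath.conf -- map a LUN's WWID to a stable alias so both
# servers see the same device name under /dev/mapper/.
multipaths {
    multipath {
        wwid  3600508b4000156d700012000000b0000
        alias mpath_shared
    }
}
```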
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>> Hi Oliver,
>>>
>>> I'm confused; would you care to enlighten me? What are you trying to
>>> accomplish in the first place? I'm assuming you're setting up an HA
>>> cluster, hence the need for shared disk and multipath.... correct? If
>>> this is what you're trying to achieve, then you're doing it all wrong:
>>> the partition should only be mounted on one server, e.g. Server A; it
>>> will only be mounted on Server B if something happens to Server A, e.g.
>>> a hardware failure. The shared drive should not be mounted on both
>>> machines at the same time, or all hell will break loose.
>>>
>>> Mounting will be done by your cluster manager, e.g. Heartbeat or the Red
>>> Hat cluster manager.
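>>>
>>> With classic Heartbeat, for example, a single haresources line covers it
>>> (the node name, device and mountpoint here are made up):

```
# /etc/ha.d/haresources -- serverA is the preferred node; Heartbeat mounts
# the shared partition there and moves it to the peer on failure.
serverA Filesystem::/dev/mapper/mpath0::/shared::ext3
```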
>>>
>>> BTW, you should be fine with either OCFS2 or GFS2 as the filesystem.
>>>
>>> HTH.
>>>
>>> Jan
>>>
>>
>>
>> _________________________________________________
>> Philippine Linux Users' Group (PLUG) Mailing List
>> http://lists.linux.org.ph/mailman/listinfo/plug
>> Searchable Archives: http://archives.free.net.ph
>>
>
>