Hello all!

Today I ran some bonnie++ tests on our NFS cluster (active/passive
with Heartbeat 2.1.2 and DRBD 0.8.4).

I used bonnie++ on a client to generate heavy traffic and load.

client: #mount -t nfs -o udp nfsvip:/export/uploads /mnt
client: #bonnie -d /mnt/ -s 750 -v 8 -p 2

With this test I got a load between 5 and 12, and the NFS share grew
to 5 GB.

To trigger the failover, I removed the nfsd kernel module with rmmod
(~: #rmmod -f nfsd).

The result was a very nice crash :)

Heartbeat switched over as expected, but the DRBD failover crashed under
the high load.
I needed a complete reconfiguration of DRBD with a full resync.
The client crashed as well; I had to reset it via iLO.

The cluster was unusable ;(

Without bonnie++, all failover scenarios work fine
(client reconnect via UDP/TCP, failover, etc.).

Does anyone have experience with high-load/high-traffic network storage
solutions?

IMHO an active/active environment would be a better solution.

Should I use NFS or is GFS/OCFS2 the better way?

Or is my configuration "bullshit"?

~: #cat /etc/drbd.conf
- - - - - - - - - - - -
global { usage-count no; } # sorry, secure backend, no www;(

common { syncer { rate 100M; } }

resource drbd0
{
    protocol C;

    handlers
    {
        pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
        pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
        local-io-error    "echo o > /proc/sysrq-trigger ; halt -f";
        # outdate-peer "/usr/sbin/drbd-peer-outdater";
    }

    startup
    {
        degr-wfc-timeout 20;
    }

    syncer
    {
        rate 100M;
    }

    net
    {
        cram-hmac-alg sha1;
        shared-secret "FooFunFactory";
    }

    on nfs00001 {
        device      /dev/drbd0;
        disk        /dev/cciss/c0d0p5;
        address     1.1.1.1:7788;
        meta-disk   internal;
    }

    on nfs00002 {
        device      /dev/drbd0;
        disk        /dev/cciss/c0d0p5;
        address     1.1.1.2:7788;
        meta-disk   internal;
    }
}
- - - - - - - - - - - - - - - - 
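One thing I noticed myself: my net section only sets the authentication
options, no split-brain policies. If I understand the DRBD 8.x docs right,
the after-sb-* options might avoid the complete resync after a crashed
failover. A sketch of what I could try (the policy values below are just
examples from the man page, not tested on this cluster):

    net
    {
        cram-hmac-alg sha1;
        shared-secret "FooFunFactory";

        # untested example values for automatic split-brain recovery:
        after-sb-0pri discard-younger-primary;
        after-sb-1pri consensus;
        after-sb-2pri disconnect;
    }

But I'm not sure whether my crash is actually a split-brain case.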

I would be very glad about any feedback!


Regards

Andre
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
