Hi Marcin,

Hmm, so if you are using multipath with VDSM, you have to manually edit the vdsm.conf file to set the correct IP every time the active controller switches? That rather defeats the purpose of multipath... That was the issue we were having: we'd spin up another host, it would connect to the SAN, which would then rebalance the disks among controllers, and all our other hosts would lose their connection to the active controller and pause all of their VMs. It's the "Device is not on preferred path" issue that is common on the MD3x00 line. We had the same errors with VMware, but VMware was able to switch to the active path automatically.
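For reference, the usual suggestion for this case is to make sure multipathd treats the array as an active/passive RDAC box so it fails over instead of pausing I/O. Here's a rough sketch of the multipath.conf device section for the MD3x00 family; the vendor/product strings and retry values are from memory, so verify them against your own "multipath -ll" output rather than taking them verbatim:

    # /etc/multipath.conf (sketch) -- tell dm-multipath the MD3x00 is an
    # active/passive RDAC array so paths fail over to the owning controller
    devices {
        device {
            vendor                "DELL"
            product               "MD32xx|MD32xxi|MD36xxi|MD36xxf"
            path_grouping_policy  group_by_prio
            prio                  rdac
            path_checker          rdac
            hardware_handler      "1 rdac"
            failback              immediate
            features              "2 pg_init_retries 50"
            no_path_retry         30
        }
    }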

On 2017-03-26 05:42 PM, Marcin Kruk wrote:
But on the Dell MD32x00 you have two controllers. The trick is that you have to maintain a link to both controllers, so the best option is to use multipath, as Yaniv said. Otherwise you get error notifications from the array.
The problem is with the iSCSI target.
After a server reboot, VDSM tries to connect to the target that was previously set, but that target may be inactive. In that case you have to remember to edit the configuration in vdsm.conf, because vdsm.conf does not accept a target with multiple IP addresses.
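To illustrate what I mean by maintaining a link to both controllers (the portal addresses below are only placeholders for your two controller IPs):

    # discover and log in to a portal on each controller, so dm-multipath
    # always has a path to whichever controller currently owns the LUN
    iscsiadm -m discovery -t sendtargets -p 192.168.130.101:3260
    iscsiadm -m discovery -t sendtargets -p 192.168.131.101:3260
    iscsiadm -m node -L all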

2017-03-26 9:40 GMT+02:00 Yaniv Kaul <yk...@redhat.com>:



    On Sat, Mar 25, 2017 at 9:20 AM, Charles Tassell
    <ctass...@gmail.com> wrote:

        Hi Everyone,

          I'm about to set up an oVirt cluster with two hosts hitting a
        Linux storage server.  Since the Linux box can provide the
        storage in pretty much any form, I'm wondering which option is
        "best." Our primary focus is on reliability, with performance
        being a close second.  Since we will only be using a single
        storage server I was thinking NFS would probably beat out
        GlusterFS, and that NFSv4 would be a better choice than
        NFSv3.  I had assumed that iSCSI would be better
        performance-wise, but from what I'm seeing online that might
        not be the case.


    NFS 4.2 is better than NFS 3 in the sense that you'll get DISCARD
    support, which is nice.
    Gluster probably requires 3 servers.
    In most cases, I don't think people see the difference in
    performance between NFS and iSCSI. The theory is that block
    storage is faster, but in practice, most don't reach the limits
    where it really matters.
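    If you want to confirm the server actually negotiates 4.2 before
    pointing oVirt at it, a quick manual mount is enough (the hostname
    and export path below are just placeholders):

        mount -t nfs -o vers=4.2 storage.example.com:/export/ovirt /mnt/test
        grep ' /mnt/test ' /proc/mounts   # should show vers=4.2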


          Our servers will be using a 1G network backbone for regular
        traffic and a dedicated 10G backbone with LACP for redundancy
        and extra bandwidth for storage traffic, if that makes a
        difference.


    LACP often (especially with NFS) does not provide extra
    bandwidth, as the (single) NFS connection tends to be sticky to a
    single physical link.
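    Even with the "best" hash policy, one TCP connection only ever uses
    one member of the bond. A typical ifcfg-style bond definition (a
    sketch, names and values are examples) makes the point:

        # ifcfg-bond0 (sketch) -- layer3+4 hashing spreads *different*
        # connections across links, but a single NFS mount is still one
        # TCP connection, i.e. at most one member link's worth of bandwidth
        BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"

    iSCSI with one session per physical NIC (and dm-multipath on top)
    sidesteps that, because each session is its own TCP connection.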
    It's one of the reasons I personally prefer iSCSI with multipathing.


          I'll probably try to do some performance benchmarks with 2-3
        options, but the reliability issue is a little harder to test
        for.  Has anyone had any particularly bad experiences with a
        particular storage option?  We have been using iSCSI with a
        Dell MD3x00 SAN and have run into a bunch of issues with the
        multipath setup, but that won't be a problem with the new SAN
        since it's only got a single controller interface.
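        For the benchmarks I'll probably start with something like fio
        against each candidate mount; the parameters below are just a
        first guess, not a tuned benchmark, and the directory is a
        placeholder:

            fio --name=ovirt-test --directory=/mnt/candidate-storage \
                --rw=randrw --bs=4k --size=1G --numjobs=4 \
                --ioengine=libaio --iodepth=16 --direct=1 \
                --runtime=60 --time_based --group_reporting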


    A single controller is not very reliable. If reliability is your
    primary concern, I suggest ensuring there is no single point of
    failure - or at least that you are aware of all of them (does the
    storage server have a redundant power supply, connected to two
    power sources? Of course in some scenarios it's overkill and
    perhaps not practical, but you should be aware of your weak spots).

    I'd stick with what you are most comfortable managing - creating,
    backing up, extending, verifying health, etc.
    Y.









_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
