Hi all. I'm very happy using srpt-exported disks via ib_srp (Linux 3.10).
The storage is used by virtual machines (Xen). On the target side (2 servers)
each server runs software RAID 10 with LVM on top of it (a rough sketch follows).
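
Roughly how the target-side storage is laid out (just a sketch; the disk
names, disk count and VG name below are examples, not the exact setup):

# software RAID 10 over the local disks, LVM PV/VG on top of it
mdadm --create /dev/md127 --level=10 --raid-devices=4 /dev/sd[b-e]
pvcreate /dev/md127
vgcreate vg_sas00 /dev/md127
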
scst.conf:
HANDLER vdisk_fileio {
        DEVICE sas00_md127 {
                filename /dev/md127
                nv_cache 1
        }
}

TARGET_DRIVER ib_srpt {
        TARGET ib_srpt_target_0 {
                enabled 1
                io_grouping_type this_group_only
                rel_tgt_id 1

                LUN 0 sas00_md127
        }
}
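
The config is loaded the usual way with scstadmin (assuming the file sits
at the default path):

scstadmin -config /etc/scst.conf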

The device backing the LVM PV (/dev/md127) is exported to the initiator node (Xen).

On the initiator node the disks from both servers go into multipath.
On this PV I have many LVs, one per virtual machine.
For each VPS I assemble a RAID 1 from 2 LVs (one LV from each target node),
as sketched below.
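
How one VPS volume is assembled on the initiator (a sketch; the md, VG and
LV names are examples, not the real ones):

# one leg per target node; each leg is an LV on the PV exported by that node
mdadm --create /dev/md/vps001 --level=1 --raid-devices=2 \
    /dev/vg_sas00/vps001 /dev/vg_sas01/vps001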

Sometimes everything works fine, sometimes performance drops. Which
sysfs/procfs entries can I check on the initiator and target side to find
the bottleneck? I tried running blktrace against the md device provided to
the VPS (roughly what I ran is shown below), but it did not give me the
information I need.
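
What I tried looks roughly like this (the md device name is an example):

blktrace -d /dev/md/vps001 -o - | blkparse -i -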

Linux 3.10
SCST v3.0.0-pre2 (svn 4973)
Debian 7

Can somebody help me?

-- 
Vasiliy Tolstov,
e-mail: [email protected]
jabber: [email protected]