I have done something similar in a home-rolled SAN, and am fairly
pleased with the results. I wasn't sure from your post if you were
looking to build the in-band virtualization yourself, or if you had a switch
that was capable of doing that. I could not afford a specialized
iscsi-switch, so I used stock ethernet switches and built my own in-band
virtualization system. Hopefully my experience can be of some use to you.
I use IET (iSCSI Enterprise Target) on boxes with internal disk arrays
to make 4 iscsi targets. Then I connect this across bonded interfaces
and twp IP fabrics to an in-band virtualizer (two, actually, running in
fail-over mode). The job of this virtualizer is similar to what you are
looking for, I think.
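For reference, exporting a disk through IET is just a short stanza in /etc/ietd.conf on each storage box. The IQN and device path below are made-up examples -- adjust to your own naming:

```
# /etc/ietd.conf on one of the storage boxes (example names)
Target iqn.2007-04.lan.san:array1.disk1
        # fileio goes through the page cache; blockio bypasses it
        Lun 0 Path=/dev/md0,Type=fileio
        MaxConnections 1
```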
Our virtualizer acts as an iscsi initiator to the 4 iscsi target
devices, and joins them into one large raid5 device. I then use LVM to
carve out the virtual disks, and present them as iscsi targets to
other systems. If you did the same thing with raid1 (mirrored) targets,
then an iscsi request would read in a round-robin fashion from the two
in-sync targets at your remote locations.
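Roughly, the assembly on the virtualizer looks like the sketch below (the IQNs, IPs, and device names are invented -- your discovered device names will differ):

```
# Act as initiator: log in to the four back-end iscsi targets
iscsiadm -m discovery -t sendtargets -p 10.0.0.1
iscsiadm -m node -T iqn.2007-04.lan.san:array1.disk1 -p 10.0.0.1 --login
# ...repeat for the other three targets...

# Join the four imported disks into one raid5 md device
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Carve virtual disks out of it with LVM
pvcreate /dev/md0
vgcreate sanvg /dev/md0
lvcreate -L 100G -n vdisk01 sanvg
# ...then export /dev/sanvg/vdisk01 as a Lun in ietd.conf
```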
The problem that I ran into was in keeping performance high. You end up
passing the data through a lot of layers (hardware -> iscsi ->
multipath -> software raid -> LVM -> iscsi -> multipath -> OS).
If you want to ENSURE that your data lands safely on the disk, then you
have to turn off the software caching along the way, and this really
hurt my performance. I now run with caching enabled in the virtualizer
iscsi target, and performance is much better. But if you do that you
need to make sure that your virtualizer does not crash, or you will
lose the dirty blocks in the cache.
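If you use IET's fileio mode, that caching trade-off is a one-word change in ietd.conf: wb (write-back) is fast but loses dirty blocks on a crash, wt (write-through) is safe but slower. The target name below is an invented example:

```
# /etc/ietd.conf on the virtualizer (example name)
Target iqn.2007-04.lan.san:virt.vdisk01
        # IOMode=wb: write-back through the page cache (fast, unsafe on crash)
        # IOMode=wt: write-through, each write hits disk (safe, slower)
        Lun 0 Path=/dev/sanvg/vdisk01,Type=fileio,IOMode=wt
```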
I was looking for a generalized solution to hook to many different
servers. If you are only dealing with one application then you should
be able to get pretty good performance by careful tuning.
The good side of going in-band is that it is generally easier and less
invasive, since there are no agents to install in every OS to manage the
out-of-band controls. But the downside is that all your data must flow
through the in-band system and that can slow things down.
Now, if you are looking at doing this inside of an iscsi-capable switch,
then the same issues apply. An iscsi-capable switch is like my setup,
but they have wrapped my virtualizer and my stock ethernet switches into
one device (which is far more specialized than my generic devices). You
still should look at how they handle caching in the switch, and how well
you can tune and protect that.
One other thought -- is this necessary? If you are just looking to read
off of two iscsi targets that have the same data, then it might be
simpler just to connect your server to the two iscsi targets in the SAN
and use raid/multipath. That should take the OS system call to read the
data, and split that into iscsi commands that pull from each of the
targets. This takes some setup on the server, but it would be much simpler!
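A sketch of that server-side setup, under the assumption that the two targets already hold identical data (again, invented IQNs and device names):

```
# On the application server: log in to both remote targets
iscsiadm -m node -T iqn.2007-04.lan.san:siteA.disk1 -p 10.1.0.1 --login
iscsiadm -m node -T iqn.2007-04.lan.san:siteB.disk1 -p 10.2.0.1 --login

# Mirror them with md; --build skips writing a superblock, which
# matters if the targets already contain identical data.  md raid1
# balances reads across the in-sync members.
mdadm --build /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext3 /dev/md0
mount /dev/md0 /data
```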
-Ty!
varun wrote:
I am thinking of a design to improve the performance of a SAN. To do
this I am thinking of in-band, switch-based virtualization which
receives an iSCSI request, splits the request in two, and
simultaneously reads data from two in-sync disks at remote locations.
Tell me, is this solution feasible? I want a software-based solution
with minimum hardware involvement. How can out-of-band storage
virtualization help me with this?
--
-===-
Ty! Boyack
NREL Unix Network Manager
[EMAIL PROTECTED]
(970) 491-1186
-===-
You received this message because you are subscribed to the Google Groups
open-iscsi group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/open-iscsi