Agreed, I would probably continue to do it on the dom0 for now and pass through 
the block devices.  If this solution were used, it would give me the flexibility 
to initiate in the guest if I decided to do so.  I believe there would 
definitely be CPU and network overhead.  Given today's CPUs, however, I don't 
know that I would worry too much about CPU.  Most of the issues I run into 
with virtualization are either storage or memory related, not CPU.   Also, given 
the issues I've heard about with TCP offloading and the bnx2 drivers, I have 
disabled offloading in our environment (again, with CPU being more of a 
commodity these days).
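
For reference, turning the offloads off is just a couple of ethtool calls per 
iSCSI interface (eth2/eth3 below are only placeholders for whichever NICs 
carry your iSCSI traffic):

    # disable segmentation offloads on the iSCSI-facing bnx2 interfaces
    # (eth2/eth3 are example names)
    ethtool -K eth2 tso off gso off
    ethtool -K eth3 tso off gso off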

Although I don't know the performance tradeoffs, I definitely think it is worth 
investigating -- namely because of the flexibility that it gives the admin.   In 
addition to being able to use guest initiation, it allows me to provision a 
volume in our iSCSI storage such that the only system that needs to access it 
is the guest vm.  I don't have to allow ACLs for multiple Xen or KVM systems to 
connect to it.  Another issue, specific to the EqualLogic but likely to show up 
in other iSCSI systems as well, is the fact that I can only have, I think, 15-20 
ACLs per volume.  If I have a 10-node cluster of dom0s for my Xen environment 
and each node has 2 iSCSI interfaces, that's 20 ACLs that may be needed per 
volume (depending on how you write your ACLs).  If the iSCSI volume were 
initiated in the guest, I would just need two ACLs, one for each virtual nic of 
the guest.  
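
Roughly, the ACL math works out to:

    dom0 initiation:   10 dom0 nodes x 2 iSCSI interfaces = 20 ACL entries per volume
    guest initiation:    1 guest     x 2 virtual NICs      =  2 ACL entries per volume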

Also, when doing this today, I have to run `iscsiadm -m discovery`, `iscsiadm 
-m node -T <iqn> -l`, and then go adjust /etc/multipath.conf on all my dom0 
nodes before I can finally get the volume's block device passed through to the 
guest.  With the "iSCSI guest initiation" solution, I would just need to do 
this on the guest alone.  
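
In other words, for every new volume the per-dom0 workflow looks roughly like 
this (the portal address, target IQN, and multipath alias are made-up 
examples):

    # repeat on every dom0 node in the cluster
    iscsiadm -m discovery -t sendtargets -p 10.0.0.10:3260
    iscsiadm -m node -T iqn.2001-05.com.equallogic:example-vol -p 10.0.0.10:3260 -l

    # then add an entry for the new device in /etc/multipath.conf, e.g.:
    #   multipath {
    #       wwid   <wwid of the new LUN>
    #       alias  example-vol
    #   }
    # and reload multipathd
    service multipathd reload

With guest initiation, those same steps happen once, inside the guest only.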

Another issue that I have today is that if I ever want to move a vm from one 
Xen cluster to another, I would need to not only rsync that base image over to 
the other cluster (because we still use image files for our root disk and 
swap), but also change around the ACLs so that the other cluster has access to 
those iSCSI volumes.  Again, look back at the last couple of paragraphs 
regarding ACLs.  If the guest were doing the initiation, I would just rsync the 
root img file over to the other cluster and start up the guest.  It would still 
be the only one with the ACLs to connect.
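
The move would basically reduce to something like this (hostnames and paths 
are made up for illustration):

    # copy the root and swap image files to a node in the new cluster
    rsync -avP /var/lib/xen/images/guest01-root.img newcluster-node1:/var/lib/xen/images/
    rsync -avP /var/lib/xen/images/guest01-swap.img newcluster-node1:/var/lib/xen/images/

    # then, on the new cluster node, just start the guest
    xm create /etc/xen/guest01.cfg

No ACL changes are needed, since the guest itself still holds the only 
initiator ACLs on the iSCSI volumes.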

However, one thing that would be worse with guest initiation is that the iSCSI 
network becomes less secure, because the guest now has direct access to the 
iSCSI network instead of just receiving a block device passed through from the 
dom0.  So this is something to be aware of.

Thanks,
Joe

===========================
Joseph R. Hoot
Lead System Programmer/Analyst
[email protected]
GPG KEY:   7145F633
===========================

On Mar 9, 2010, at 8:06 AM, Ciprian Marius Vizitiu (GBIF) wrote:

> On 03/09/2010 01:49 PM, Hoot, Joseph wrote:
>> I had a similar issue, just not using bonding.  The gist of my problem was 
>> that, when connecting a physical network card to a bridge, iscsiadm will not 
>> log in through that bridge (at least in my experience).  I could discover 
>> just fine, but was never able to log in.  I am no longer attempting (at 
>> least for the moment because of time) to get it working this way, but I 
>> would love to change our environment in the future if a scenario such as 
>> this would work, because it gives me the flexibility to pass a virtual 
>> network card through to the guest and allow the guest to initiate its own 
>> iSCSI traffic instead of me doing it all at the dom0 level and then passing 
>> those block devices through.
>> 
> Out of curiosity, why would you do that? Why let the guest bear the 
> iSCSI load instead of the host OS offering block devices? Potentially the 
> host OS could even use "hardware acceleration" (assuming that works)?
> 
> Anybody care to give an argument? Because from what I've seen, iSCSI load 
> gets distributed to various CPUs in funny ways. Assuming KVM and no 
> hardware iSCSI: have the host do iSCSI and the guests use "Realtek" emulated 
> cards, and the iSCSI CPU load gets distributed. Have the guest do the iSCSI, 
> again with the Realtek emulation, and one can see only the CPUs allocated to 
> that guest being used, and heavily. But then switch to virtio for the network 
> and the iSCSI load is once again spread across multiple CPUs no matter 
> who's doing the iSCSI. At least for 1 guest... so which poison to choose?
> 
