On 03/09/2010 01:49 PM, Hoot, Joseph wrote:
Out of curiosity, why would you do that? Why let the guest bear the
iSCSI load instead of having the host OS offer block devices? And eventually the
host OS could take advantage of "hardware acceleration" (assuming that works)?
I had a similar issue, just not using bonding. The gist of my problem was
that, when a physical network card is attached to a bridge, iscsiadm will not
log in through that bridge (at least in my experience). I could discover just
fine, but was never able to log in. I have stopped trying (for the moment, due to
time constraints) to get it working this way, but I would love to
change our environment in the future if a scenario such as this would work,
because it gives me the flexibility to pass a virtual network card through to
the guest and allow the guest to initiate its own iSCSI traffic instead of me
doing it all at the dom0 level and then passing those block devices through.
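For reference, what I was attempting looked roughly like the following (the portal
address and target IQN below are placeholders, not our real values); the discovery
step succeeded over the bridged interface, but the login never completed:

    # target discovery over the bridged interface -- this part worked
    iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260

    # logging in to the discovered target -- this is where it failed for me
    iscsiadm -m node -T iqn.2001-04.com.example:storage.lun1 \
        -p 192.168.1.50:3260 --login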
Anybody care to make an argument either way? From what I've seen, the iSCSI load
gets distributed to the various CPUs in funny ways. Assuming KVM and no
hardware iSCSI offload: have the host do the iSCSI and give the guests emulated
"Realtek" cards, and the iSCSI CPU load gets spread across CPUs. Have the guest do
the iSCSI, again with the emulated Realtek card, and only the CPUs allocated to that
guest are used, and used heavily. But switch the guest to virtio for networking and
the iSCSI load is once again spread across multiple CPUs no matter
who is doing the iSCSI. At least for one guest... so which poison to choose?
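For anyone who wants to reproduce the comparison, the guest-side difference is just
the NIC model in the guest's libvirt interface definition; something like the
following (bridge and device names are examples only, not a recommended config):

    <!-- emulated Realtek NIC (rtl8139) -->
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='rtl8139'/>
    </interface>

    <!-- paravirtualized virtio NIC -->
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>

And for the host-does-iSCSI case, the already-logged-in LUN can be handed to the
guest as a plain block device, e.g. (the by-path name is again a placeholder):

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/disk/by-path/ip-192.168.1.50:3260-iscsi-iqn.2001-04.com.example:storage.lun1-lun-0'/>
      <target dev='vda' bus='virtio'/>
    </disk>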