On Thu, May 29, 2008 at 02:35:28PM -0300, Arturo 'Buanzo' Busleiman wrote:
> 
> On May 28, 2:45 pm, Konrad Rzeszutek <[EMAIL PROTECTED]> wrote:
>  > I am not sure how you are partitioning your space. Does each guest
>  > have an iSCSI target (or LUN) assigned to it? Or is it one big
>  > drive that they run from? Also are you envisioning using this
>  > with LiveMigration (or whatever it is called with your virtualization
>  > system)?
> 
> I'm using VMware Server (not ESX, just the free one).
> 
> The guests themselves (the disk where the OS is installed) are stored as 
> vmdk's on a local folder.
> 
> I want to provide application storage for each virtual machine, no 
> shared storage. I have 1.6TB total capacity, and plan on giving each 
> guest as much raid-5 storage space as they need.
> 
> The iscsiadm discovery on my Host reports all available targets, over 
> both interfaces (broadcom and intel).
> 
> So, basically, I have these doubts / options:
> 
> 1) Login to each target on the host, and add raw disk access to the 
> guests to those host-devices.
> 2) Don't use open-iscsi on the host, but use it on each guest to connect 
> to the targets.
> 

If you run iSCSI in each guest you end up with overhead. Each guest will
have to do its own iSCSI packet assembly/disassembly, along with its own
socket operations (TCP/IP processing), and your target will see as many
connections as you have guests. Each guest would also need to run the
multipath suite, which puts I/O on the connection every 40 seconds (or
more often if a failure has occurred).
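
That 40-second figure is the multipathd path-checker interval, which is
tunable in /etc/multipath.conf. A minimal sketch only - the exact
default depends on your multipath-tools version/distribution:

  defaults {
          # seconds between the path-checker I/Os multipathd sends
          polling_interval   40
  }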

If, on the other hand, you make the connection on your host, set up
multipath there, create LVs and assign them to each of your guests, you
get (a rough sketch of the commands follows this list):
 - less overhead (one OS doing the iSCSI packet assembly/disassembly
   and the TCP/IP processing).
 - one connection to the target per path. You can even purchase two
   extra NICs and put them and the target on their own subnet so that
   the only traffic on it is iSCSI.
 - one machine running multipath, and you can make it queue I/O if the
   network goes down. This will block the guests (you might need to
   raise the SCSI timeout in the guests - no idea which registry key
   you need to change for this in Windows).
 - one place to carve up your capacity: since each guest sits on an LV,
   you can resize the volumes as you see fit later on.
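
Something along these lines on the host. This is only a sketch: the IQN
is the PowerVault one from your output below, but the multipath device
name, volume group/LV names and sizes are placeholders, so adjust them
to your setup.

  # log in to the target (repeat for the second portal if you use it)
  iscsiadm -m node \
      -T iqn.1984-05.com.dell:powervault.6001e4f0004326c100000000482127e3 \
      -p 192.168.130.102:3260 --login

  # check that multipathd coalesced the paths into a single device;
  # "no_path_retry queue" (or "features 1 queue_if_no_path") in
  # /etc/multipath.conf makes it queue I/O instead of failing it
  # when every path is down
  multipath -ll

  # carve the multipath device up with LVM, one LV per guest
  pvcreate /dev/mapper/mpath0
  vgcreate vg_guests /dev/mapper/mpath0
  lvcreate -L 200G -n lv_guest1 vg_guests

  # growing a guest's volume later
  lvextend -L +50G /dev/vg_guests/lv_guest1

Then you hand /dev/vg_guests/lv_guest1 to the guest as a raw disk in
VMware Server instead of putting a vmdk on top of it.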

> And the main doubt: how does link aggregation / dualpath fit into those 
> options?

I can't give you an opinion about link aggregation as I don't have that
much experience in this field.

But as far as multipath goes, you are better off doing it on your host
than in the guests.
>  
> Also, I find this error:
> 
> [EMAIL PROTECTED]:~# iscsiadm -m node -L all
> Login session [iface: default, target: 
> iqn.1984-05.com.dell:powervault.6001e4f0004326c100000000482127e3, 
> portal: 192.168.130.102,3260]
> Login session [iface: default, target: 
> iqn.1984-05.com.dell:powervault.6001e4f0004326c100000000482127e3, 
> portal: fe80:0000:0000:0000:021e:4fff:fe43:26c3,3260]
> iscsiadm: initiator reported error (4 - encountered connection failure)
> iscsiadm: Could not log into all portals. Err 107.

Did you configure your ethX interfaces to use IPv6? The second portal
address is in IPv6 (link-local) format.
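
If you don't plan to use IPv6, one way to stop "-L all" from trying
that portal is to delete its node record - just a suggestion; the IQN
and portal below are taken from your output above:

  iscsiadm -m node -o delete \
      -T iqn.1984-05.com.dell:powervault.6001e4f0004326c100000000482127e3 \
      -p fe80:0000:0000:0000:021e:4fff:fe43:26c3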

> 
> I'm using crossover cables.

No switch? Then link aggregation wouldn't matter, I would think (since
the ARP requests aren't going through a switch).