Re: Design Questions

2008-07-02 Thread Mike Christie

Eli Dorfman wrote:
>> open-iscsi does not really support link-local IPv6 addresses yet. There is
>> a way around this by setting up an iface for the net interface (ethX) that
>> it is local to and then binding to it, but it is a little complicated to do.
>>
> Can you explain in detail how to do that?

OK, it is not that complicated. Just bind to an iface like you normally would.

Set up an iface and bind it to the IPv6 link-local portal that was 
discovered, by running:

iscsiadm -m node -T target -p your_ipv6_link_local_addr -I 
iface_you_created_that_can_access_ipv6_link_local_portal -o new

then log in as normal.
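
For example, a rough sketch of the full sequence might look like the 
following (iface_eth0, eth0, target and the portal address are just 
placeholders for your setup, and this assumes an open-iscsi recent enough 
to support iface.net_ifacename):

 # create an iface record and tie it to the net interface that is
 # local to the link-local address
 iscsiadm -m iface -I iface_eth0 -o new
 iscsiadm -m iface -I iface_eth0 -o update -n iface.net_ifacename -v eth0

 # bind the node record for the link-local portal to that iface
 iscsiadm -m node -T target -p your_ipv6_link_local_addr -I iface_eth0 -o new

 # and log in through it
 iscsiadm -m node -T target -p your_ipv6_link_local_addr -I iface_eth0 --login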

The problem is that you cannot do discovery against a link-local address, though.


> When do you think IPv6 will be supported? What needs to be done?
> 

I have no plans to support IPv6 link-local addresses. You would need some 
code to tell the initiator which network interface to use, or maybe loop 
over every network interface and figure out which one to use for the user.




Re: Design Questions

2008-07-02 Thread Eli Dorfman

> open-iscsi does not really support link-local IPv6 addresses yet. There is
> a way around this by setting up an iface for the net interface (ethX) that
> it is local to and then binding to it, but it is a little complicated to do.
>
Can you explain in detail how to do that?
When do you think IPv6 will be supported? What needs to be done?

Thanks,
Eli




Re: Problems accessing virtual disks on MD3000i, was: RE: Design Questions

2008-06-02 Thread Arturo 'Buanzo' Busleiman

Arturo 'Buanzo' Busleiman wrote:
> OK, don't ask me why, but I just added a 20 GB virtual disk on LUN-3, 
> and BANG, the host SAW it with no problems whatsoever.
>
> Maybe 100 GB and/or LUN-1 makes a difference? I hope not!
Well, it was just a matter of going to Modify -> change preferred path and 
putting the disk I couldn't access on one of the two controller ports I can 
see from this server.

I guess moving from crossover cabling to a switched solution is what I'll 
do next (and setting up multipath).

Buanzo.





Re: Problems accessing virtual disks on MD3000i, was: RE: Design Questions

2008-06-02 Thread Konrad Rzeszutek

> Apparently, no changes, except that there are no partitions sde through sdg.
> hwinfo and fdisk still report the same.
> 
> Btw, I also removed the Access DellUtility partition. No difference 
> either, except that /dev/sdd disappeared :)
> 
> Any other ideas?

multipath-tools? Did you install it?
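
For reference, on an Ubuntu host that would be roughly (just a sketch; the 
package and init script names may differ on your release):

 apt-get install multipath-tools
 /etc/init.d/multipath-tools start
 multipath -ll

multipath -ll should then show whether the MD3000i paths get grouped into 
one device under /dev/mapper/.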




Re: Problems accessing virtual disks on MD3000i, was: RE: Design Questions

2008-06-02 Thread Arturo 'Buanzo' Busleiman

OK, don't ask me why, but I just added a 20 GB virtual disk on LUN-3, and 
BANG, the host SAW it with no problems whatsoever.

Maybe 100 GB and/or LUN-1 makes a difference? I hope not!

Buanzo,
a very puzzled man.





RE: Problems accessing virtual disks on MD3000i, was: RE: Design Questions

2008-06-02 Thread Bryan Mclellan

You mean the preferred controller module on the MD3000i SAN, I assume.

I'd make sure you can ping all of the nodes (four if you have two controllers).
Discover them all via sendtargets and log in to all of them.

 iscsiadm -m discovery -t st -p 192.168.130.101
 iscsiadm -m node -l
 fdisk -l

You should end up with two or four devices (depending on the number of 
controllers): one for each virtual disk mapped to that host, for each node 
you've logged in to, provided you've removed the Access mapping (which is 
fine to do). fdisk -l should print a partition table, or lack thereof, for 
every disk it can read (which should be half the number of nodes).
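
If it helps, a quick way to see which sd device arrived over which portal 
(just a sketch, assuming a reasonably recent iscsiadm) is:

 iscsiadm -m session -P 3 | grep -E 'Portal:|Attached scsi disk'

which prints each portal followed by the disks attached through that session.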

It took me a while to figure out that I couldn't access the disks via the 
second controller; playing around with iscsiadm a lot is what finally clued 
me in to it. It helped that I already had a test partition on the virtual 
disk, created elsewhere, so cat /proc/partitions revealed that the partition 
was only visible on two of the disk devices, not all four.

Bryan

As a side note, I'd double check your subnet configurations on the controllers. 
Each controller should only have one interface on a specific subnet. I don't 
think this is related to your current problem though.

-Original Message-
From: open-iscsi@googlegroups.com [mailto:[EMAIL PROTECTED] On Behalf Of Arturo 
'Buanzo' Busleiman
Sent: Monday, June 02, 2008 12:08 PM
To: open-iscsi@googlegroups.com
Subject: Re: Problems accessing virtual disks on MD3000i, was: RE: Design 
Questions


Hi Bryan,

I changed my setup to only initiate sessions to the primary controller. 
This is my dmesg output now:



Apparently, no changes, except that there are no partitions sde through sdg.
hwinfo and fdisk still report the same.

Btw, I also removed the Access DellUtility partition. No difference
either, except that /dev/sdd disappeared :)

Any other ideas?







Re: Problems accessing virtual disks on MD3000i, was: RE: Design Questions

2008-06-02 Thread Arturo 'Buanzo' Busleiman

Hi Bryan,

I changed my setup to only initiate sessions to the primary controller. 
This is my dmesg output now:

[  225.682760] sd 3:0:0:1: [sdc] 209715200 512-byte hardware sectors 
(107374 MB)
[  225.683124] sd 3:0:0:1: [sdc] Write Protect is off
[  225.683129] sd 3:0:0:1: [sdc] Mode Sense: 77 00 10 08
[  225.683856] sd 3:0:0:1: [sdc] Write cache: enabled, read cache: 
enabled, supports DPO and FUA
[  225.685255] sd 3:0:0:1: [sdc] 209715200 512-byte hardware sectors 
(107374 MB)
[  225.685640] sd 3:0:0:1: [sdc] Write Protect is off
[  225.685644] sd 3:0:0:1: [sdc] Mode Sense: 77 00 10 08
[  225.686757] sd 3:0:0:1: [sdc] Write cache: enabled, read cache: 
enabled, supports DPO and FUA
[  225.686763]  sdc:end_request: I/O error, dev sdc, sector 0
[  226.205534] Buffer I/O error on device sdc, logical block 0
[  226.738010] end_request: I/O error, dev sdc, sector 0
[  226.738018] Buffer I/O error on device sdc, logical block 0
[  227.253843] end_request: I/O error, dev sdc, sector 0
[  227.253850] Buffer I/O error on device sdc, logical block 0
[  227.758117] end_request: I/O error, dev sdc, sector 0
[  227.758124] Buffer I/O error on device sdc, logical block 0
[  228.285522] end_request: I/O error, dev sdc, sector 0
[  228.285530] Buffer I/O error on device sdc, logical block 0
[  228.802647] end_request: I/O error, dev sdc, sector 0
[  228.802654] Buffer I/O error on device sdc, logical block 0
[  229.333818] end_request: I/O error, dev sdc, sector 0
[  229.333823] Buffer I/O error on device sdc, logical block 0
[  229.849659] end_request: I/O error, dev sdc, sector 0
[  229.849664] Buffer I/O error on device sdc, logical block 0
[  230.365489] end_request: I/O error, dev sdc, sector 0
[  230.365495] Buffer I/O error on device sdc, logical block 0
[  230.365561] Dev sdc: unable to read RDB block 0
[  230.881333] end_request: I/O error, dev sdc, sector 0
[  230.881339] Buffer I/O error on device sdc, logical block 0
[  231.397134] end_request: I/O error, dev sdc, sector 0
[  231.397139] Buffer I/O error on device sdc, logical block 0
[  231.913060] end_request: I/O error, dev sdc, sector 24
[  232.428813] end_request: I/O error, dev sdc, sector 24
[  232.944649] end_request: I/O error, dev sdc, sector 0
[  233.460491] end_request: I/O error, dev sdc, sector 0
[  233.460580] sd 3:0:0:1: [sdc] Attached SCSI disk
[  233.976394] end_request: I/O error, dev sdc, sector 0
[  234.492213] end_request: I/O error, dev sdc, sector 0
[  235.008043] end_request: I/O error, dev sdc, sector 0
[  235.523861] end_request: I/O error, dev sdc, sector 0
[  297.041718] end_request: I/O error, dev sdc, sector 0
[  297.041728] Buffer I/O error on device sdc, logical block 0
[  297.557553] end_request: I/O error, dev sdc, sector 0
[  297.557561] Buffer I/O error on device sdc, logical block 0
[  298.090022] end_request: I/O error, dev sdc, sector 0
[  298.090031] Buffer I/O error on device sdc, logical block 0
[  298.090094] Buffer I/O error on device sdc, logical block 1
[  298.090166] Buffer I/O error on device sdc, logical block 2
[  298.090222] Buffer I/O error on device sdc, logical block 3
[  298.622514] end_request: I/O error, dev sdc, sector 0
[  298.622521] Buffer I/O error on device sdc, logical block 0
[  299.138350] end_request: I/O error, dev sdc, sector 209715192
[  299.138358] Buffer I/O error on device sdc, logical block 26214399
[  299.654205] end_request: I/O error, dev sdc, sector 209715192
[  299.654214] Buffer I/O error on device sdc, logical block 26214399
[  300.170001] end_request: I/O error, dev sdc, sector 0
[  300.170009] Buffer I/O error on device sdc, logical block 0
[  300.669786] end_request: I/O error, dev sdc, sector 0
[  308.989195] end_request: I/O error, dev sdc, sector 0
[  308.989205] Buffer I/O error on device sdc, logical block 0
[  308.989280] Buffer I/O error on device sdc, logical block 1
[  309.505007] end_request: I/O error, dev sdc, sector 0
[  310.020849] end_request: I/O error, dev sdc, sector 209715192
[  310.536690] end_request: I/O error, dev sdc, sector 209715192
[  311.052532] end_request: I/O error, dev sdc, sector 0
[  311.568367] end_request: I/O error, dev sdc, sector 0
[  321.585585] end_request: I/O error, dev sdc, sector 0
[  322.101500] end_request: I/O error, dev sdc, sector 0
[  322.101509] Buffer I/O error on device sdc, logical block 0
[  322.833613] end_request: I/O error, dev sdc, sector 0
[  323.349434] end_request: I/O error, dev sdc, sector 0
[  355.214898] end_request: I/O error, dev sdc, sector 0
[  355.214905] Buffer I/O error on device sdc, logical block 0
[  355.747385] end_request: I/O error, dev sdc, sector 0
[  355.747394] Buffer I/O error on device sdc, logical block 0
[  356.279864] end_request: I/O error, dev sdc, sector 0
[  356.279873] Buffer I/O error on device sdc, logical block 0
[  356.279935] Buffer I/O error on device sdc, logical block 1
[  356.279996] Buffer I/O error on device sdc, logical block 2
[  356.280051] Buffer I/O error

Problems accessing virtual disks on MD3000i, was: RE: Design Questions

2008-06-02 Thread Bryan Mclellan

How many controllers are in your MD3000i? If you have two, make sure you're 
logging in to the interfaces on the controller that is currently the preferred 
controller (you can tell in MDSM). That dmesg output and not being able to use 
fdisk against the disk is exactly what I get when I log in to the interfaces on 
the standby controller.

Bryan

http://blog.loftninjas.org/?p=195

-Original Message-
From: open-iscsi@googlegroups.com [mailto:[EMAIL PROTECTED] On Behalf Of Arturo 
'Buanzo' Busleiman
Sent: Monday, June 02, 2008 9:32 AM
To: open-iscsi@googlegroups.com
Subject: Re: Design Questions


On May 30, 10:29 am, Konrad Rzeszutek <[EMAIL PROTECTED]> wrote:
 > If you run iSCSI on each guest you end up with overhead.

I decided the same. So I'm running iscsi on the host, and via
/dev/disk/by-path/whatever provide the guests with SAN storage.

Bad thing there, for each virtual disk of the san, I get two /dev
entries, so I'm wondering how to setup the multipath over those two
/dev/disk/by-path entries (one over each controller).

I also noticed the IPv6 thing, so I ended up specifying the IP by hand,
and using two iscsiadm commands for each discovery / session initiation.
Also, as you said, link aggregation makes no sense over crossover and
two different interfaces.

What I'm NOT doing, is LVM. I wanted to go one layer at a time, and
adding LVM was too much for my limited time in here.

So, currently, I have two remaining issues:

1) setup multipath
2) **URGENT**: I've added a second virtual disk and mapped it to my host
(SAN is an MD3000i, host is ubuntu server with 2.6.24-17, i'm waiting
for 2.6.25 which fixes the skb broadcast bug it seems).
If I use hwinfo, I can see the virtual disk (over /dev/sdc and the 2nd
entry, /dev/sdf). Over fdisk, I get NOTHING.

Here's the dmesg output, hwinfo output, and fdisk output:

HWINFO
==
40: SCSI 300.1: 10600 Disk
  [Created at block.222]
  Unique ID: uBVf.EABbh0DH0_1
  SysFS ID: /block/sdc
  SysFS BusID: 3:0:0:1
  SysFS Device Link: /devices/platform/host3/session1/target3:0:0/3:0:0:1
  Hardware Class: disk
  Model: "DELL MD3000i"
  Vendor: "DELL"
  Device: "MD3000i"
  Revision: "0670"
  Serial ID: "84L000I"
  Driver: "sd"
  Device File: /dev/sdc (/dev/sg4)
  Device Files: /dev/sdc,
/dev/disk/by-id/scsi-36001e4f00043a3da04dc4843982f,
/dev/disk/by-path/ip-192.168.131.101:3260-iscsi-iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3-lun-1
  Device Number: block 8:32-8:47 (char 21:4)
  Geometry (Logical): CHS 102400/64/32
  Size: 209715200 sectors a 512 bytes
  Drive status: no medium
  Config Status: cfg=new, avail=yes, need=no, active=unknown

That "drive status: no medium" drives me crazy. For comparison, this is
the output for the first virtual disk I created, the one I can access:

41: SCSI 300.0: 10600 Disk
  [Created at block.222]
  Unique ID: R0Fb.EABbh0DH0_1
  SysFS ID: /block/sdb
  SysFS BusID: 3:0:0:0
  SysFS Device Link: /devices/platform/host3/session1/target3:0:0/3:0:0:0
  Hardware Class: disk
  Model: "DELL MD3000i"
  Vendor: "DELL"
  Device: "MD3000i"
  Revision: "0670"
  Serial ID: "84L000I"
  Driver: "sd"
  Device File: /dev/sdb (/dev/sg3)
  Device Files: /dev/sdb,
/dev/disk/by-id/scsi-36001e4f0004326c105b3483e9c7a,
/dev/disk/by-path/ip-192.168.131.101:3260-iscsi-iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3-lun-0
  Device Number: block 8:16-8:31 (char 21:3)
  Geometry (Logical): CHS 261/255/63
  Size: 4194304 sectors a 512 bytes
  Config Status: cfg=new, avail=yes, need=no, active=unknown

DMESG
=

end_request: I/O error, dev sdc, sector 209715192
end_request: I/O error, dev sdc, sector 0
Buffer I/O error on device sdc, logical block 0
end_request: I/O error, dev sdc, sector 0
end_request: I/O error, dev sdc, sector 209715192
end_request: I/O error, dev sdc, sector 0
Buffer I/O error on device sdc, logical block 0
end_request: I/O error, dev sdc, sector 0
Buffer I/O error on device sdc, logical block 0
end_request: I/O error, dev sdc, sector 0
end_request: I/O error, dev sdc, sector 209715192
end_request: I/O error, dev sdc, sector 0
Buffer I/O error on device sdc, logical block 0
end_request: I/O error, dev sdc, sector 0
end_request: I/O error, dev sdc, sector 209715192
Buffer I/O error on device sdc, logical block 26214399
end_request: I/O error, dev sdc, sector 209715192
end_request: I/O error, dev sdc, sector 0
Buffer I/O error on device sdc, logical block 0
end_request: I/O error, dev sdc, sector 0
end_request: I/O error, dev sdc, sector 209715192
end_request: I/O error, dev sdc, sector 0

The attach-time messages got lost, but I remember this line:
Dev sdc: unable to read RDB block 0

FDISK
=
[EMAIL PROTECTED]:~# fdisk -l /dev/sdc
[EMAIL PROTECTED]:~# fdisk /dev/sdc

Unable to read /dev/sdc

M

Re: Design questions

2008-06-02 Thread Arturo 'Buanzo' Busleiman

Here is the output you asked for, thanks!

[EMAIL PROTECTED]:~# iscsiadm -m session -P 3
iSCSI Transport Class version 2.0-724
iscsiadm version 2.0-865
Target: iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3
Current Portal: 192.168.131.101:3260,1
Persistent Portal: 192.168.131.101:3260,1
**
Interface:
**
Iface Name: default
Iface Transport: tcp
Iface IPaddress: 192.168.131.1
Iface HWaddress: default
Iface Netdev: default
SID: 1
iSCSI Connection State: LOGGED IN
Internal iscsid Session State: NO CHANGE

Negotiated iSCSI params:

HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 131072
MaxXmitDataSegmentLength: 65536
FirstBurstLength: 8192
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1

Attached SCSI devices:

Host Number: 3  State: running
scsi3 Channel 00 Id 0 Lun: 0
Attached scsi disk sdb  State: running
scsi3 Channel 00 Id 0 Lun: 1
Attached scsi disk sdc  State: running
scsi3 Channel 00 Id 0 Lun: 31
Attached scsi disk sdd  State: running
Current Portal: 192.168.130.101:3260,1
Persistent Portal: 192.168.130.101:3260,1
**
Interface:
**
Iface Name: default
Iface Transport: tcp
Iface IPaddress: 192.168.130.2
Iface HWaddress: default
Iface Netdev: default
SID: 2
iSCSI Connection State: LOGGED IN
Internal iscsid Session State: NO CHANGE

Negotiated iSCSI params:

HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 131072
MaxXmitDataSegmentLength: 65536
FirstBurstLength: 8192
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1

Attached SCSI devices:

Host Number: 4  State: running
scsi4 Channel 00 Id 0 Lun: 0
Attached scsi disk sde  State: running
scsi4 Channel 00 Id 0 Lun: 1
Attached scsi disk sdf  State: running
scsi4 Channel 00 Id 0 Lun: 31
Attached scsi disk sdg  State: running


/dev/sdc == /dev/sdf -> the one I can't use

[EMAIL PROTECTED]:~# iscsiadm -m discovery -t st -p 192.168.131.101
192.168.130.101:3260,1 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3
192.168.130.102:3260,2 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3
192.168.131.101:3260,1 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3
192.168.131.102:3260,2 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3
[fe80::::021e:4fff:fe43:26c3]:3260,1 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3
[fe80::::021e:4fff:fe43:26c5]:3260,1 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3
[fe80::::021e:4fff:fe43:a3dc]:3260,2 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3
[fe80::::021e:4fff:fe43:a3de]:3260,2 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3


Also, what about this:
Dev sdc: unable to read RDB block 0




Re: Design Questions

2008-06-02 Thread Konrad Rzeszutek

On Mon, Jun 02, 2008 at 01:32:15PM -0300, Arturo 'Buanzo' Busleiman wrote:
> 
> On May 30, 10:29 am, Konrad Rzeszutek <[EMAIL PROTECTED]> wrote:
>  > If you run iSCSI on each guest you end up with overhead.
> 
> I decided the same. So I'm running iscsi on the host, and via 
> /dev/disk/by-path/whatever provide the guests with SAN storage.
> 
> Bad thing there, for each virtual disk of the san, I get two /dev 
> entries, so I'm wondering how to setup the multipath over those two 
> /dev/disk/by-path entries (one over each controller).
> 
> I also noticed the IPv6 thing, so I ended up specifying the IP by hand, 
> and using two iscsiadm commands for each discovery / session initiation. 

That works, but it is a bit of a hack. Why not just use IPv4 on both interfaces?

> Also, as you said, link aggregation makes no sense over crossover and 
> two different interfaces.

Well, you could use load balancing or play with ifaces with your two NICs
and take advantage of the two Ethernet cables from your target. 

What this means is that you can set up /dev/sdc to go over one of your 
NICs, and /dev/sdf over the other. For that, look in the README file and 
read up on ifaces. This is the poor man's fine-grained NIC configuration.

Or you can use load balancing, where you bond both interfaces into one, but
for that you need a switch. The same goes for link aggregation or link
failure handling.

But as said before, you are using crossover cables, so go with ifaces to
take advantage of setting up two sessions, one over each NIC.
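
As a rough sketch (the interface names eth1 and eth2 are made up, and the 
portal is one of the ones from your discovery output; the README has the 
authoritative steps):

 # one iface record per NIC
 iscsiadm -m iface -I iface0 -o new
 iscsiadm -m iface -I iface0 -o update -n iface.net_ifacename -v eth1
 iscsiadm -m iface -I iface1 -o new
 iscsiadm -m iface -I iface1 -o update -n iface.net_ifacename -v eth2

 # rediscover so the node records get bound to both ifaces, then log in
 iscsiadm -m discovery -t st -p 192.168.130.101 -I iface0 -I iface1
 iscsiadm -m node -l

That gives you one session per NIC, and multipath can then tie the 
resulting block devices together.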

> 
> What I'm NOT doing, is LVM. I wanted to go one layer at a time, and 
> adding LVM was too much for my limited time in here.
> 
> So, currently, I have two remaining issues:
> 
> 1) setup multipath

That is pretty easy. Just install the package and the two block devices 
(or four if you are using a dual-controller setup) will show up under 
/dev/mapper/, on which you can use LVM.

> 2) **URGENT**: I've added a second virtual disk and mapped it to my host 
> (SAN is an MD3000i, host is ubuntu server with 2.6.24-17, i'm waiting 
> for 2.6.25 which fixes the skb broadcast bug it seems).

Huh? What skb broadcast bug?

> If I use hwinfo, I can see the virtual disk (over /dev/sdc and the 2nd 
> entry, /dev/sdf). Over fdisk, I get NOTHING.

fdisk -l doesn't give you data?

> 
> Here's the dmesg output, hwinfo output, and fdisk output:
> 
> HWINFO
> ==
> 40: SCSI 300.1: 10600 Disk
>   [Created at block.222]
>   Unique ID: uBVf.EABbh0DH0_1
>   SysFS ID: /block/sdc
>   SysFS BusID: 3:0:0:1
>   SysFS Device Link: /devices/platform/host3/session1/target3:0:0/3:0:0:1
>   Hardware Class: disk
>   Model: "DELL MD3000i"
>   Vendor: "DELL"
>   Device: "MD3000i"
>   Revision: "0670"
>   Serial ID: "84L000I"
>   Driver: "sd"
>   Device File: /dev/sdc (/dev/sg4)
>   Device Files: /dev/sdc, 
> /dev/disk/by-id/scsi-36001e4f00043a3da04dc4843982f, 
> /dev/disk/by-path/ip-192.168.131.101:3260-iscsi-iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3-lun-1
>   Device Number: block 8:32-8:47 (char 21:4)
>   Geometry (Logical): CHS 102400/64/32
>   Size: 209715200 sectors a 512 bytes
>   Drive status: no medium
>   Config Status: cfg=new, avail=yes, need=no, active=unknown
> 
> That "drive status: no medium" drives me crazy. For comparison, this is 

Uh... don't go crazy. Just install multipath and make sure you have this
configuration entry in the multipath.conf file:

 device {
         vendor                  "DELL"
         product                 "MD3000i"
         product_blacklist       "Universal Xport"
         features                "1 queue_if_no_path"
         path_checker            rdac
         hardware_handler        "1 rdac"
         path_grouping_policy    group_by_prio
         prio                    "rdac"
         failback                immediate
 }

Keep in mind that, depending on which version of multipath you install,
you might not have the 'rdac' path checker, or the path priority program
might be called differently. Get the latest one and see which config
options you need.

(The above works with SLES10 SP2).
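
Once the package is installed and that entry is in place, something like 
this should show both paths grouped under a single mapper device (exact 
output varies between multipath versions):

 multipath -ll
 ls -l /dev/mapper/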

> the output for the first virtual disk I created, the one I can access:
> 
> 41: SCSI 300.0: 10600 Disk
>   [Created at block.222]
>   Unique ID: R0Fb.EABbh0DH0_1
>   SysFS ID: /block/sdb
>   SysFS BusID: 3:0:0:0
>   SysFS Device Link: /devices/platform/host3/session1/target3:0:0/3:0:0:0
>   Hardware Class: disk
>   Model: "DELL MD3000i"
>   Vendor: "DELL"
>   Device: "MD3000i"
>   Revision: "0670"
>   Serial ID: "84L000I"
>   Driver: "sd"
>   Device File: /dev/sdb (/dev/sg3)
>   Device Files: /dev/sdb, 
> /dev/disk/by-id/scsi-36001e4f0004326c105b3483e9c7a, 
> /dev/disk/by-path/ip-192.168.131.101:3260-iscsi-iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3-lun-0
>   Device Number: block 8:16-8:31 (char 21:3)
>   Geometry (Logical): CHS 261/255/63
>   Size: 4194304 sectors a 512 bytes
>   Config Status: cfg=new, avail=yes, need=no, active=unknown
> 

That looks wrong. How many controllers do you have? I wonder if this is 
related to the other session that di

Re: Design Questions

2008-06-02 Thread Arturo 'Buanzo' Busleiman

On May 30, 10:29 am, Konrad Rzeszutek <[EMAIL PROTECTED]> wrote:
 > If you run iSCSI on each guest you end up with overhead.

I decided the same. So I'm running iscsi on the host, and via 
/dev/disk/by-path/whatever provide the guests with SAN storage.

Bad thing there, for each virtual disk of the san, I get two /dev 
entries, so I'm wondering how to setup the multipath over those two 
/dev/disk/by-path entries (one over each controller).

I also noticed the IPv6 thing, so I ended up specifying the IP by hand, 
and using two iscsiadm commands for each discovery / session initiation. 
Also, as you said, link aggregation makes no sense over crossover and 
two different interfaces.

What I'm NOT doing, is LVM. I wanted to go one layer at a time, and 
adding LVM was too much for my limited time in here.

So, currently, I have two remaining issues:

1) setup multipath
2) **URGENT**: I've added a second virtual disk and mapped it to my host 
(SAN is an MD3000i, host is ubuntu server with 2.6.24-17, i'm waiting 
for 2.6.25 which fixes the skb broadcast bug it seems).
If I use hwinfo, I can see the virtual disk (over /dev/sdc and the 2nd 
entry, /dev/sdf). Over fdisk, I get NOTHING.

Here's the dmesg output, hwinfo output, and fdisk output:

HWINFO
==
40: SCSI 300.1: 10600 Disk
  [Created at block.222]
  Unique ID: uBVf.EABbh0DH0_1
  SysFS ID: /block/sdc
  SysFS BusID: 3:0:0:1
  SysFS Device Link: /devices/platform/host3/session1/target3:0:0/3:0:0:1
  Hardware Class: disk
  Model: "DELL MD3000i"
  Vendor: "DELL"
  Device: "MD3000i"
  Revision: "0670"
  Serial ID: "84L000I"
  Driver: "sd"
  Device File: /dev/sdc (/dev/sg4)
  Device Files: /dev/sdc, 
/dev/disk/by-id/scsi-36001e4f00043a3da04dc4843982f, 
/dev/disk/by-path/ip-192.168.131.101:3260-iscsi-iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3-lun-1
  Device Number: block 8:32-8:47 (char 21:4)
  Geometry (Logical): CHS 102400/64/32
  Size: 209715200 sectors a 512 bytes
  Drive status: no medium
  Config Status: cfg=new, avail=yes, need=no, active=unknown

That "drive status: no medium" drives me crazy. For comparison, this is 
the output for the first virtual disk I created, the one I can access:

41: SCSI 300.0: 10600 Disk
  [Created at block.222]
  Unique ID: R0Fb.EABbh0DH0_1
  SysFS ID: /block/sdb
  SysFS BusID: 3:0:0:0
  SysFS Device Link: /devices/platform/host3/session1/target3:0:0/3:0:0:0
  Hardware Class: disk
  Model: "DELL MD3000i"
  Vendor: "DELL"
  Device: "MD3000i"
  Revision: "0670"
  Serial ID: "84L000I"
  Driver: "sd"
  Device File: /dev/sdb (/dev/sg3)
  Device Files: /dev/sdb, 
/dev/disk/by-id/scsi-36001e4f0004326c105b3483e9c7a, 
/dev/disk/by-path/ip-192.168.131.101:3260-iscsi-iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3-lun-0
  Device Number: block 8:16-8:31 (char 21:3)
  Geometry (Logical): CHS 261/255/63
  Size: 4194304 sectors a 512 bytes
  Config Status: cfg=new, avail=yes, need=no, active=unknown

DMESG
=

end_request: I/O error, dev sdc, sector 209715192
end_request: I/O error, dev sdc, sector 0
Buffer I/O error on device sdc, logical block 0
end_request: I/O error, dev sdc, sector 0
end_request: I/O error, dev sdc, sector 209715192
end_request: I/O error, dev sdc, sector 0
Buffer I/O error on device sdc, logical block 0
end_request: I/O error, dev sdc, sector 0
Buffer I/O error on device sdc, logical block 0
end_request: I/O error, dev sdc, sector 0
end_request: I/O error, dev sdc, sector 209715192
end_request: I/O error, dev sdc, sector 0
Buffer I/O error on device sdc, logical block 0
end_request: I/O error, dev sdc, sector 0
end_request: I/O error, dev sdc, sector 209715192
Buffer I/O error on device sdc, logical block 26214399
end_request: I/O error, dev sdc, sector 209715192
end_request: I/O error, dev sdc, sector 0
Buffer I/O error on device sdc, logical block 0
end_request: I/O error, dev sdc, sector 0
end_request: I/O error, dev sdc, sector 209715192
end_request: I/O error, dev sdc, sector 0

The attach-time messages got lost, but I remember this line:
Dev sdc: unable to read RDB block 0

FDISK
=
[EMAIL PROTECTED]:~# fdisk -l /dev/sdc
[EMAIL PROTECTED]:~# fdisk /dev/sdc

Unable to read /dev/sdc

Might the disk still be initializing? The Dell client says it's finished...

Thanks!!





Re: Design Questions

2008-05-30 Thread Mike Christie

Arturo 'Buanzo' Busleiman wrote:
> On May 28, 2:45 pm, Konrad Rzeszutek <[EMAIL PROTECTED]> wrote:
>  > I am not sure how you are partitioning your space. Does each guest
>  > have an iSCSI target (or LUN) assigned to it? Or is it one big
>  > drive that they run from? Also are you envisioning using this
>  > with LiveMigration (or whatever it is called with your virtualization
>  > system)?
> 
> I'm using Vmware-Server (not ESX, just the free one).
> 
> The guests themselves (the disk where the OS is installed) are stored as 
> vmdk's on a local folder.
> 
> I want to provide application storage for each virtual machine, no 
> shared storage. I have 1.6TB total capacity, and plan on giving each 
> guest as much raid-5 storage space as they need.
> 
> The iscsiadm discovery on my Host reports all available targets, over 
> both interfaces (broadcom and intel).
> 
> So, basically, I have these doubts / options:
> 
> 1) Login to each target on the host, and add raw disk access to the 
> guests to those host-devices.
> 2) Don't use open-iscsi on the host, but use it on each guest to connect 
> to the targets.
> 
> And the main doubt: how does link aggregation / dualpath fit into those 
> options?
>  
> Also, i find this error:
> 
> [EMAIL PROTECTED]:~# iscsiadm -m node -L all
> Login session [iface: default, target: 
> iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3, 
> portal: 192.168.130.102,3260]
> Login session [iface: default, target: 
> iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3, 
> portal: fe80::::021e:4fff:fe43:26c3,3260]
> iscsiadm: initiator reported error (4 - encountered connection failure)
> iscsiadm: Could not log into all portals. Err 107.
> 

open-iscsi does not really support link-local IPv6 addresses yet. There is 
a way around this by setting up an iface for the net interface (ethX) that 
it is local to and then binding to it, but it is a little complicated to do.




Re: Design Questions

2008-05-30 Thread Konrad Rzeszutek

On Thu, May 29, 2008 at 02:35:28PM -0300, Arturo 'Buanzo' Busleiman wrote:
> 
> On May 28, 2:45 pm, Konrad Rzeszutek <[EMAIL PROTECTED]> wrote:
>  > I am not sure how you are partitioning your space. Does each guest
>  > have an iSCSI target (or LUN) assigned to it? Or is it one big
>  > drive that they run from? Also are you envisioning using this
>  > with LiveMigration (or whatever it is called with your virtualization
>  > system)?
> 
> I'm using Vmware-Server (not ESX, just the free one).
> 
> The guests themselves (the disk where the OS is installed) are stored as 
> vmdk's on a local folder.
> 
> I want to provide application storage for each virtual machine, no 
> shared storage. I have 1.6TB total capacity, and plan on giving each 
> guest as much raid-5 storage space as they need.
> 
> The iscsiadm discovery on my Host reports all available targets, over 
> both interfaces (broadcom and intel).
> 
> So, basically, I have these doubts / options:
> 
> 1) Login to each target on the host, and add raw disk access to the 
> guests to those host-devices.
> 2) Don't use open-iscsi on the host, but use it on each guest to connect 
> to the targets.
> 

If you run iSCSI on each guest you end up with overhead. Each guest will
have to do its own iSCSI packet assembly/disassembly, along with doing
socket operations (TCP/IP assembly), and your target will have X-guests'
worth of connections. Each guest would also need to run the multipath
suite, which puts I/O on the connection every 40 seconds (or less if a
failure has occurred).

If on the other hand you make the connection on your host, set up
multipath there, and create LVs and assign them to each of your guests,
you have:
 - less overhead (one OS doing the iSCSI packet assembly/disassembly and
   TCP/IP assembly).
 - one connection to the target. You can even purchase two extra NICs and
   create your own subnet for them and the target so that there is no
   traffic there except iSCSI.
 - one machine running multipath, and you can make it queue I/O in one
   place if the network goes down. This will block the guests (you might
   need to change the SCSI timeout in the guests - no idea what registry
   key you need to change for this in Windows).
 - one place to zone out your huge capacity, where you can resize the
   volumes as you see fit (using LVs for the guests, which you can
   re-size later); a rough sketch follows below.
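
As a rough sketch of that last point (the device and volume names below 
are made up; use whatever name multipath gives you under /dev/mapper/):

 # turn the multipathed LUN into an LVM physical volume
 pvcreate /dev/mapper/mpath0
 vgcreate san_vg /dev/mapper/mpath0

 # carve out a logical volume per guest, and grow it later as needed
 lvcreate -L 100G -n guest1_data san_vg
 lvextend -L +50G /dev/san_vg/guest1_data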

> And the main doubt: how does link aggregation / dualpath fit into those 
> options?

I can't give you an opinion about link aggregation as I don't have that
much experience in this field.

But in regard to multipath, you are better off doing it on your host
than on the guests.
>  
> Also, i find this error:
> 
> [EMAIL PROTECTED]:~# iscsiadm -m node -L all
> Login session [iface: default, target: 
> iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3, 
> portal: 192.168.130.102,3260]
> Login session [iface: default, target: 
> iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3, 
> portal: fe80::::021e:4fff:fe43:26c3,3260]
> iscsiadm: initiator reported error (4 - encountered connection failure)
> iscsiadm: Could not log into all portals. Err 107.

Did you configure your ethX to use IPv6? The second target IP 
is in IPv6 format.

> 
> I'm using crossover cables.

No switch? Then link aggregation wouldn't matter, I would think (since
the ARP requests aren't going to a switch).
> 
> 
> > 




Re: Design Questions

2008-05-29 Thread Arturo 'Buanzo' Busleiman

On May 28, 2:45 pm, Konrad Rzeszutek <[EMAIL PROTECTED]> wrote:
 > I am not sure how you are partitioning your space. Does each guest
 > have an iSCSI target (or LUN) assigned to it? Or is it one big
 > drive that they run from? Also are you envisioning using this
 > with LiveMigration (or whatever it is called with your virtualization
 > system)?

I'm using Vmware-Server (not ESX, just the free one).

The guests themselves (the disk where the OS is installed) are stored as 
vmdk's on a local folder.

I want to provide application storage for each virtual machine, no 
shared storage. I have 1.6TB total capacity, and plan on giving each 
guest as much raid-5 storage space as they need.

The iscsiadm discovery on my Host reports all available targets, over 
both interfaces (broadcom and intel).

So, basically, I have these doubts / options:

1) Login to each target on the host, and add raw disk access to the 
guests to those host-devices.
2) Don't use open-iscsi on the host, but use it on each guest to connect 
to the targets.

And the main doubt: how does link aggregation / dualpath fit into those 
options?
 
Also, i find this error:

[EMAIL PROTECTED]:~# iscsiadm -m node -L all
Login session [iface: default, target: 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3, 
portal: 192.168.130.102,3260]
Login session [iface: default, target: 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3, 
portal: fe80::::021e:4fff:fe43:26c3,3260]
iscsiadm: initiator reported error (4 - encountered connection failure)
iscsiadm: Could not log into all portals. Err 107.

I'm using crossover cables.





Re: Design Questions

2008-05-28 Thread Konrad Rzeszutek

On Wed, May 28, 2008 at 01:15:36PM -0300, Arturo 'Buanzo' Busleiman wrote:
> 
> Arturo 'Buanzo' Busleiman wrote:
> > So, the obvious question here: I want to store the data in the SAN. 
> > Should I get my sessions running in the host, or inside each virtual 
> > machine?
> If this is not the correct group to ask this question, I'd gladly accept 
> suggestions for other groups! :)

I am not sure how you are partitioning your space. Does each guest
have an iSCSI target (or LUN) assigned to it? Or is it one big
drive that they run from? Also are you envisioning using this
with LiveMigration (or whatever it is called with your virtualization
system)?




Re: Design Questions

2008-05-28 Thread Arturo 'Buanzo' Busleiman

Arturo 'Buanzo' Busleiman wrote:
> So, the obvious question here: I want to store the data in the SAN. 
> Should I get my sessions running in the host, or inside each virtual 
> machine?
If this is not the correct group to ask this question, I'd gladly accept 
suggestions for other groups! :)

