Re: iSCSI target recommendation

2010-03-29 Thread Joe Landman

An Oneironaut wrote:

Hey all.  Could anyone suggest a good NAS with about 2 to 6 TB of storage
for under 4k?  It's hard to find out whether these vendors have tested with
open-iscsi or not, so I was hoping some of you out there who have used a
storage device in this range would have some opinions.  Please tell me if
you have any suggestions.


If you don't mind a vendor reply, have a look at 
http://scalableinformatics.com/deltav


Not meant as a commercial, so skip/delete if you object to commercial 
content.  And blame/flame me offline if you do object vociferously.


All units are tested and functional with the open-iscsi initiator and with the 
ietd and scst targets.  They are Linux-based, providing simultaneous NAS (file 
targets: NFS, CIFS/SMB, ...) and SAN (block: iSCSI targets/initiators, SRP, 
...) over a default transport of Gigabit Ethernet, with 10GbE and InfiniBand 
optional.  We get very good performance over GbE using the open-iscsi 
initiator against these devices, and also excellent performance running the 
open-iscsi initiator on them and connecting to other targets, saturating 
multiple GbE NICs without pain.
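
For anyone who wants to verify a unit against the open-iscsi initiator 
themselves, the basic discovery and login sequence with iscsiadm looks roughly 
like this (the portal address and IQN below are placeholders):

  # discover the targets a portal exposes
  iscsiadm -m discovery -t sendtargets -p 192.168.1.50

  # log in to one of the discovered targets
  iscsiadm -m node -T iqn.2010-03.example:storage.lun0 -p 192.168.1.50 --login

  # show the session and the SCSI devices it maps to
  iscsiadm -m session -P 3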


We don't list pricing for the lower-end units, but they would meet your price 
target.




Thanks,

JD




--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics, Inc.
email: land...@scalableinformatics.com
web  : http://scalableinformatics.com
   http://scalableinformatics.com/jackrabbit
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615




Re: Debian Lenny and md3000i with modified RDAC

2010-03-29 Thread Kristoffer Egefelt

 Under load, I'm seeing occasional controller resets, and then some i/o
 timeouts on disks owned by the controller being reset.


 What kind of timeouts are you seeing?  Are they on the initiator, and if so,
 can you send /var/log/messages?  If there are nop/ping iSCSI timeouts then it
 may be a bug where open-iscsi was too aggressive in deciding that a timeout
 had occurred.


On the servers, nothing other than losing access to storage for 30-60 seconds.
But the MD3000i logs the following - which seems to be a controller resetting
itself - the interesting parts being the "Physical Disk path redundancy lost"
entries, which according to Dell should be related to the open-source RDAC
driver. The connected OS is Debian, which Dell for some reason chose not to
support ;-)

10-03-16 10:43:31  Physical Disk  Enclosure 0, Slot 14  Physical Disk path redundancy restored
10-03-16 10:43:31  Physical Disk  Enclosure 0, Slot 13  Physical Disk path redundancy restored
10-03-16 10:43:31  Physical Disk  Enclosure 0, Slot 12  Physical Disk path redundancy restored
10-03-16 10:43:24  Controller Module  RAID Controller Module in slot 0  Alternate RAID controller module checked in late
10-03-16 10:43:10  Sensor  Enclosure 0, Slot 1  Temperature changed to optimal
10-03-16 10:42:42  Controller Module  RAID Controller Module in slot 0  All channel reset detected
10-03-16 10:42:42  Controller Module  RAID Controller Module in slot 0  AEN posted for recently logged event
10-03-16 10:42:36  Component (EMM, GBIC/SFP, Power Supply, or Fan)  Enclosure 0, Slot 1  All connections established through wide port
10-03-16 10:42:36  Component (EMM, GBIC/SFP, Power Supply, or Fan)  Enclosure 0, Slot 1  Single connection established through previously failed wide port
10-03-16 10:42:36  Component (EMM, GBIC/SFP, Power Supply, or Fan)  Enclosure 0, Slot 0  All connections established through wide port
10-03-16 10:42:36  Component (EMM, GBIC/SFP, Power Supply, or Fan)  Enclosure 0, Slot 0  Single connection established through previously failed wide port
10-03-16 10:43:40  Controller Module  RAID Controller Module in slot 1  Start-of-day routine completed
10-03-16 10:43:29  Controller Module  RAID Controller Module in slot 1  Cache mirroring on RAID controller modules not synchronized
10-03-16 10:43:25  Initiator  Host-side: RAID controller module in slot 0, port -  ISCSI carrier has been detected
10-03-16 10:43:24  Initiator  Host-side: RAID controller module in slot 0, port -  ISCSI carrier has been detected
10-03-16 10:43:21  Target  iqn.2000-04.com.qlogic:qla4052c.fs10515a02997.1  iSCSI interface restarted
10-03-16 10:43:17  Pack  Enclosure 0  RAID Controller Module cache battery is fully charged
10-03-16 10:43:17  Controller Module Firmware  None  Premium feature enabled
10-03-16 10:43:17  Controller Module Firmware  None  Premium feature enabled
10-03-16 10:43:17  Controller Module Firmware  None  Premium feature enabled
10-03-16 10:43:17  Controller Module Firmware  None  Premium feature enabled
10-03-16 10:43:15  Controller Module  RAID Controller Module in slot 1  RAID Controller Module reset
10-03-16 10:43:04  Controller Module  RAID Controller Module in slot 1  Start-of-day routine begun
10-03-16 10:42:33  Disk  Enclosure 0, Slot 14  Physical Disk path redundancy lost
10-03-16 10:42:33  Disk  Enclosure 0, Slot 13  Physical Disk path redundancy lost
10-03-16 10:42:33  Disk  Enclosure 0, Slot 12  Physical Disk path redundancy lost
10-03-16 10:42:33  Controller Module  RAID Controller Module in slot 0  AEN posted for recently logged event
10-03-16 10:42:20  Component (EMM, GBIC/SFP, Power Supply, or Fan)  Enclosure 0, Slot 0  All connections established through wide port
10-03-16 10:42:19  Component (EMM, GBIC/SFP, Power Supply, or Fan)  Enclosure 0, Slot 0  Single connection established through previously failed wide port
10-03-16 10:42:02  Controller Module  RAID Controller Module in slot 0  Mode select for redundant RAID controller module page 2C received
10-03-16 10:42:02  Controller Module  RAID Controller Module in slot 1  RAID Controller Module placed online
10-03-16 10:42:01  Controller Module  RAID Controller Module in slot 0  Unwritten data/consistency recovered from cache
10-03-16 10:42:01  Controller Module  RAID Controller Module in slot 0  Unwritten data/consistency recovered from cache
10-03-16 10:42:01  Controller Module  RAID Controller Module in slot 0  Unwritten data/consistency recovered from cache
10-03-16 10:41:59  Component (EMM, GBIC/SFP, Power Supply, or Fan)  Enclosure 0, Slot 0  Degraded wide port becomes failed
10-03-16 10:41:59  Component (EMM, GBIC/SFP, Power Supply, or Fan)  Enclosure 0, Slot 0  Optimal wide port becomes degraded
10-03-16 10:41:59  Component (EMM, GBIC/SFP, Power Supply, or Fan)  Enclosure 0, Slot 0  All connections established through wide port
10-03-16 10:41:59  Component (EMM, GBIC/SFP, Power Supply, or Fan)  Enclosure 0, Slot 0  Single connection established through previously failed wide port
10-03-16 
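
For what it's worth, a typical dm-multipath device stanza for the MD3000i with 
the RDAC handler looks roughly like this (a sketch only; attribute names differ 
between multipath-tools versions, so check your distribution's defaults and 
Dell's notes):

  device {
      vendor                  "DELL"
      product                 "MD3000i"
      # route failover commands through the RDAC hardware handler
      hardware_handler        "1 rdac"
      path_grouping_policy    group_by_prio
      path_checker            rdac
      failback                immediate
      # older multipath-tools express the prioritizer as:
      # prio_callout "/sbin/mpath_prio_rdac /dev/%n"
      prio                    rdac
  }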

iSCSI lvm and reboot

2010-03-29 Thread Raimund Sacherer

I am new to iSCSI and filer systems, but I am evaluating whether they make 
sense for our clients.

So I set up my test lab and created a KVM server with 3 instances:

2 x Ubuntu (one for Zimbra LDAP, one for Zimbra Mailserver)
1 x Ubuntu (with some OpenVZ virtual machines in it)

These 3 KVM instances have raw LVM disks which live on a volume on the iSCSI 
filer.

Yesterday I rebooted the filer, without doing anything to the KVM machines, to 
simulate an outage or human error.

The reboot itself is fine and the iSCSI targets are exposed again, but the KVM 
servers end up with their filesystems mounted read-only.

Is there any way to get the LVM volumes on the KVM server machine, or inside 
the KVM virtualized servers, back to R/W mode?

I could not figure it out; vgchange -aly storage on the KVM server did not 
change anything.

Is a cold reset of the KVM guests the only way to handle this situation? As a 
reboot command was taking very long, I guess because of the read-only mounts, 
I had to reset them.

A push in the right direction, e.g. towards documentation, or any other help 
is much appreciated.


Thank you,
best

-
RunSolutions
 Open Source IT Consulting
-
Email: r...@runsolutions.com

Parc Bit - Centro Empresarial Son Espanyol
Edificio Estel - Local 3D
07121 -  Palma de Mallorca
Baleares




RE: iSCSI lvm and reboot

2010-03-29 Thread Geoff Galitz


Take a look at configuring multipathing between the KVM server and the file
server.  You can take advantage of failover, or simply block IO until the file
server returns to service, after which everything should resume normally.

It works for me.

-geoff


-
Geoff Galitz
Blankenheim NRW, Germany
http://www.galitz.org/
http://german-way.com/blog/


 -Original Message-
 From: open-iscsi@googlegroups.com [mailto:open-is...@googlegroups.com] On
 Behalf Of Raimund Sacherer
 Sent: Sunday, 28 March 2010 10:29
 To: open-iscsi
 Subject: iSCSI lvm and reboot
 
 




Re: iSCSI target recommendation

2010-03-29 Thread Pasi Kärkkäinen
On Sun, Mar 28, 2010 at 08:52:42AM -0400, Joe Landman wrote:
 An Oneironaut wrote:
 Hey all.  Could anyone suggest a good NAS with about 2 to 6 TB of storage
 for under 4k?  It's hard to find out whether these vendors have tested with
 open-iscsi or not, so I was hoping some of you out there who have used a
 storage device in this range would have some opinions.  Please tell me if
 you have any suggestions.

 If you don't mind a vendor reply, have a look at  
 http://scalableinformatics.com/deltav

 Not meant as a commercial, so skip/delete if you object to commercial  
 content.  And blame/flame me offline if you do object vociferously.


Don't worry, the URL didn't work ;)

404 Not Found Error: No content found at the requested URL

Sorry, no content was found at the requested path - it's possible that you've 
requested this page in error.

-- Pasi




Re: iSCSI lvm and reboot

2010-03-29 Thread Mike Christie

On 03/29/2010 12:51 PM, Raimund Sacherer wrote:

Hi Geoff,

I was under the impression that for multipath you need either 2 distinct 
connections, so that you can fail over if one connection fails, or 2 
block-synchronized filers, which help you out if one filer dies.



You can do dm-multipath with only one path. It basically gives you an extra 
layer at which to retry and queue IO in case something happens. The scsi/iscsi 
layer only gives you 5 retries.
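
A rough sketch of setting that up on the KVM host follows (Debian-style 
package and init script names; these are assumptions, so adjust for your 
distribution):

  # install and load the multipath layer
  apt-get install multipath-tools
  modprobe dm_multipath
  /etc/init.d/multipath-tools start   # multipathd on Red Hat-style systems

  # build multipath maps over the existing iSCSI disks and inspect them
  multipath -v2
  multipath -ll

  # then use the resulting /dev/mapper/<wwid> device, rather than the raw
  # /dev/sdX path, as the LVM physical volume so queueing happens above the
  # SCSI layer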





Re: iSCSI lvm and reboot

2010-03-29 Thread Mike Christie

On 03/28/2010 03:28 AM, Raimund Sacherer wrote:


I am new to iSCSI and filer systems, but I am evaluating whether they make 
sense for our clients.

So I set up my test lab and created a KVM server with 3 instances:

2 x Ubuntu (one for Zimbra LDAP, one for Zimbra Mailserver)
1 x Ubuntu (with some OpenVZ virtual machines in it)

These 3 KVM instances have raw LVM disks which live on a volume on the iSCSI 
filer.

Yesterday I rebooted the filer, without doing anything to the KVM machines, to 
simulate an outage or human error.

The reboot itself is fine and the iSCSI targets are exposed again, but the KVM 
servers end up with their filesystems mounted read-only.



In this type of setup you will want high noop timeout values (or maybe just 
turn noops off) and a high replacement_timeout value. So in iscsid.conf on 
the initiators do something like:


# When the iscsi layer detects that it cannot reach the target, it will stop
# IO, and if it cannot reconnect to the target within the timeout below, it
# will fail IO. This will cause filesystems to be remounted read-only, or you
# will get IO errors. So set this to a value long enough to cover the
# failures you expect.

node.session.timeo.replacement_timeout = 600

# you can just turn these off
node.conn[0].timeo.logout_timeout = 0
node.conn[0].timeo.noop_out_interval = 0
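
Note that iscsid.conf only affects node records created at discovery time. 
For targets that were already discovered, the same setting can be pushed into 
the existing record with iscsiadm, roughly like this (target name and portal 
are placeholders):

  iscsiadm -m node -T iqn.2010-03.example:filer.lun0 -p 192.168.0.10 \
           -o update -n node.session.timeo.replacement_timeout -v 600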


Someone else suggested using dm-multipath; for that you could set 
queue_if_no_path or set no_path_retry to a high value. This basically just 
catches errors from the iscsi/scsi layer and adds extra requeueing 
capability. queue_if_no_path will internally queue IO until the path comes 
back, or until the system runs out of memory or dies in some other way.
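
For reference, a minimal /etc/multipath.conf expressing that might look 
roughly like the following (illustrative only; whether this goes in the 
defaults, devices or multipaths section, and the exact attribute names, depend 
on your multipath-tools version):

  defaults {
      # queue IO indefinitely while no path is available
      # (same effect as: features "1 queue_if_no_path")
      no_path_retry   queue

      # or, for a bounded wait, retry a fixed number of times
      # (each retry lasts one polling_interval):
      # no_path_retry 120
  }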

