To repeat what some others have said, yes, Solaris seems to handle an iSCSI
device going offline, in that it doesn't panic and continues working once
everything has timed out.
However, that doesn't necessarily mean it's ready for production use. ZFS will
hang for 3 minutes (180 seconds) waiting
On Mon, 7 Apr 2008, Ross wrote:
However, that doesn't necessarily mean it's ready for production use.
ZFS will hang for 3 minutes (180 seconds) waiting for the iSCSI client
to time out. Now I don't know about you, but to me HA doesn't mean
"Highly Available, with occasional 3-minute breaks".
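For what it's worth, on builds that have the `failmode` pool property you can at least choose how the pool behaves while the device is gone, instead of blocking forever. A minimal sketch, assuming a recent build and a hypothetical pool named `tank`:

```shell
# Check how the pool reacts to a lost device; "wait" (the default)
# blocks all I/O until the device comes back.
zpool get failmode tank

# "continue" returns EIO on new writes instead of blocking;
# "panic" crashes the box so a cluster framework can fail over.
zpool set failmode=continue tank
```

That doesn't shorten the 180-second iSCSI timeout itself, it only changes what ZFS does while waiting.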
Crazy question here... but has anyone tried this with, say, a QLogic
hardware iSCSI card? Seems like it would solve all your issues.
Granted, they aren't free like the software stack, but if you're trying
to set up an HA solution, the ~$800 price tag per card seems pretty darn
reasonable.
Hi All;
We are running the latest Solaris 10 on an X4500 Thumper. We defined a test
iSCSI LUN. Output below:
Target: AkhanTemp/VM
iSCSI Name: iqn.1986-03.com.sun:02:72406bf8-2f5f-635a-f64c-cb664935f3d1
Alias: AkhanTemp/VM
Connections: 0
ACL list:
TPGT list:
LUN
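A listing like the one above is what the Solaris iSCSI target administration tool prints. As a hedged sketch of the commands involved (the dataset name `AkhanTemp/VM` is taken from the alias in the output; `shareiscsi` requires the iSCSI target packages):

```shell
# Export a ZFS volume over iSCSI; ZFS creates and manages the target.
zfs set shareiscsi=on AkhanTemp/VM

# Verbose target listing: prints the iSCSI Name (IQN), alias,
# connection count, ACL/TPGT lists, and LUN details as shown above.
iscsitadm list target -v
```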
Mertol Ozyoney wrote:
Hi All ;
There are a set of issues being looked at that prevent the VMware ESX
server from working with the Solaris iSCSI Target.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6597310
At this time there is no target date for when these issues will
Ross Smith wrote:
Which again is unacceptable for network storage. If hardware RAID
controllers took over a minute to time out a drive, network admins would
be in an uproar. Why should software be held to a different standard?
You need to take a systems approach to analyzing these things.
For
Thanks, James;
The problem is nearly identical to mine.
When we had two LUNs, VMware tried to multipath across them. I think this is a
bug inside VMware, as it thinks the two LUN 0s are the same. I think I can fool
it by setting up targets with different LUN numbers.
After I figured this out, I
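If the workaround is giving each target a distinct LUN number, a sketch with `iscsitadm` might look like the following. The `-u` LUN option, target names, and backing-store paths here are assumptions for illustration; check `iscsitadm(1M)` on your build:

```shell
# Create two targets with different LUN numbers so ESX does not
# mistake them for two paths to the same LUN 0.
iscsitadm create target -u 0 -b /dev/zvol/rdsk/AkhanTemp/VM  vmtarget0
iscsitadm create target -u 1 -b /dev/zvol/rdsk/AkhanTemp/VM2 vmtarget1
```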
I've been using ZFS on my home media server for about a year now. There's a lot
I like about Solaris, but the rest of the computers in my house are Macs. Now
that the Mac has experimental read/write support for ZFS, I'd like to migrate
my zpool to my Mac Pro. I primarily use the machine to
Jeff,
On Mon, Mar 31, 2008 at 9:01 AM, Jeff Bonwick [EMAIL PROTECTED] wrote:
Peter,
That's a great suggestion. And as fortune would have it, we have the
code to do it already. Scrubbing in ZFS is driven from the logical
layer, not the physical layer. When you scrub a pool, you're
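Because scrubbing is driven from the logical layer, it walks only allocated blocks and verifies every copy against its checksum. The usual invocation (pool name `tank` is hypothetical):

```shell
# Start a scrub; it traverses the block tree from the logical layer,
# so only live data is read and checksum-verified.
zpool scrub tank

# Watch progress and any checksum errors found.
zpool status -v tank
```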
Some time ago I experienced the same issue.
Only one target could be connected from an ESX host. The others were shown as
alternative paths to that target.
If I'm remembering correctly, I read on a forum that it has something to do
with the disk's serial number.
Normally every single (i)SCSI disk
On Mon, Apr 7, 2008 at 12:46 PM, David Loose [EMAIL PROTECTED] wrote:
The problem is that I upgraded to Solaris nv84 a while ago and bumped my
zpool to version 9 (I think) at that time. The Macintosh guys only support up
to version 8. There doesn't seem to be too much activity on the ZFS
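One way to confirm the mismatch is to compare what the pool is at against what each implementation supports (pool name `mediapool` is hypothetical; the commands are standard `zpool` subcommands):

```shell
# Show the on-disk version of the pool; a pool upgraded to v9
# cannot be imported by an implementation that only supports v8.
zpool get version mediapool

# List the versions this zpool binary understands and what each added.
zpool upgrade -v
```

Pool versions can't be downgraded in place; the usual escape is to recreate the pool at the older version and copy the data across.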
On Apr 7, 2008, at 1:46 PM, David Loose wrote:
my Solaris Samba shares never really played well with iTunes.
Another approach might be to stick with Solaris on the server, and
run netatalk (netatalk.sourceforge.net) instead of Samba (or, you
know, your Macs can speak NFS ;).
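If you go the NFS route, serving a ZFS filesystem to the Macs is a one-liner on the Solaris side (dataset name `tank/media` is hypothetical):

```shell
# Share the media filesystem over NFS; ZFS manages the share itself,
# with no /etc/dfs/dfstab entry needed.
zfs set sharenfs=on tank/media

# On the Mac side, mount it via Finder (Go > Connect to Server, nfs://...)
# or from Terminal, e.g.: mount -t nfs server:/tank/media /Volumes/media
```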
--
Keith H.