Re: [zfs-discuss] Re: Thumper Origins Q

2007-01-24 Thread Shannon Roddy
Frank Cusack wrote:
> On January 24, 2007 9:40:41 AM -0800 Richard Elling
> [EMAIL PROTECTED] wrote:
>> Peter Eriksson wrote:
>>> Yes please. Now give me a fairly cheap (but still quality) FC-attached
>>> JBOD utilizing SATA/SAS disks and I'll be really happy! :-)
>>
>> ... with write cache and dual redundant controllers?  I think we call
>> that the Sun StorageTek 3511.
>
> Ah but the 3511 JBOD is not supported for direct attach to a host, nor
> is it supported for attachment to a SAN.  You have to have a 3510 or
> 3511 with RAID controller to use the 3511 JBOD.  The RAID controller is
> pretty pricey on these guys.  $5k each IIRC.


I started looking into the 3511 for a ZFS system and just about
immediately stopped considering it for this reason.  If it is not
supported in JBOD, then I might as well go get a third party JBOD at the
same level of support.


> You can get a 4Gb FC-SATA RAID with 12*750GB drives for about $10k
> from third parties.  I doubt we'll ever see that from Sun, if for no
> other reason than the drive markups.  (Which might be justified based
> on drive qualification; I'm not making any comment as to whether the
> markup is warranted, just that it exists and is obscene.)

Yep.  I went with a third party FC/SATA unit which has been flawless as
a direct attach for my ZFS JBOD system.  Paid about $0.70/GB.  And I
still have enough money left over this year to upgrade my network core.
If I had gone with Sun, I wouldn't be able to push as many bits across
my network.

I just don't know how people can afford Sun storage, or even if they
can, what drives them to pay such premiums.

Sun is missing out on lots of lower end storage business, but perhaps
that is by design.  I am a small shop by many standards, but I would
have spent tens of thousands over the last few years with Sun if they
had reasonably priced storage.  *shrug*  I just need a place to put my
bits.  It doesn't need to be the fastest, bleeding edge stuff.  Just a
bucket that performs reasonably, and preferably one that I can use with
ZFS.

-Shannon

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Thumper Origins Q

2007-01-24 Thread Shannon Roddy
Ben Gollmer wrote:
> On Jan 24, 2007, at 12:37 PM, Shannon Roddy wrote:
>> I went with a third party FC/SATA unit which has been flawless as
>> a direct attach for my ZFS JBOD system.  Paid about $0.70/GB.
>
> What did you use, if you don't mind my asking?

Arena Janus 6641.  Turns out I underestimated what I paid per GB.  I
went back and dug up the invoice and I paid just under $1/GB.  My memory
was a little off on the 750 GB drive prices.  I used an LSI Logic FC
card that was listed on the Solaris Ready page, and I am using the LSI
Logic driver.

http://www.sun.com/io_technologies/vendor/lsi_logic_corporation.html

Works fine for our purposes, but again, we don't need screaming bleeding
edge performance either.

-Shannon



Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-20 Thread Shannon Roddy
Frank Cusack wrote:
> thumper (x4500) seems pretty reasonable ($/GB).
>
> -frank


I am always amazed that people consider Thumper reasonably priced.  A
markup of 450% or more per drive over July 2006 street prices doesn't
seem reasonable to me, even after subtracting the cost of the system
itself.  I like the x4500, and I wish I had one.  But I can't pay what
Sun wants for it.  So, instead, I am stuck buying lower end Sun systems
and third party SCSI/SATA JBODs.  I like Sun.  I like their products.
But I can't understand their storage pricing most of the time.

-Shannon



Re: [zfs-discuss] Re: How much do we really want zpool remove?

2007-01-18 Thread Shannon Roddy
Celso wrote:
> Both removing disks from a zpool and modifying raidz arrays would be
> very useful.

Add my vote for this.


[zfs-discuss] Sol 10 x86_64 intermittent SATA device locks up server

2006-08-28 Thread Shannon Roddy
Hello All,

I have an issue where I have two SATA cards with 5 drives each in one
ZFS pool.  One of the devices has been intermittently failing, and the
problem is that the entire box seems to lock up on occasion when this
happens.  I currently have the SATA cable to that device disconnected
in the hope that the box will at least stay up for now.  This is a new
build that I am burning in, in the hope that it will serve as NFS space
for our Solaris boxen.  Below is the output from zpool status:

bash-3.00# zpool status
  pool: tank
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas
exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
tankDEGRADED 0 0 0
  raidz ONLINE   0 0 0
c1t1d0  ONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
c1t3d0  ONLINE   0 0 0
c1t4d0  ONLINE   0 0 0
c1t5d0  ONLINE   0 0 0
  raidz DEGRADED 0 0 0
c2t1d0  ONLINE   0 0 0
c2t2d0  ONLINE   0 0 0
c2t3d0  ONLINE   0 0 0
c2t4d0  ONLINE   0 0 0
c2t5d0  UNAVAIL 4263 0  cannot open

errors: No known data errors


And below is some info from /var/adm/messages:

Aug 29 12:42:08 localhost marvell88sx: [ID 812917 kern.warning] WARNING:
marvell88sx1: error on port 5:
Aug 29 12:42:08 localhost marvell88sx: [ID 702911 kern.notice]   SError
interrupt
Aug 29 12:42:08 localhost marvell88sx: [ID 702911 kern.notice]   link
data receive error - crc
Aug 29 12:42:08 localhost marvell88sx: [ID 702911 kern.notice]   link
data receive error - state
Aug 29 12:42:08 localhost marvell88sx: [ID 812917 kern.warning] WARNING:
marvell88sx1: error on port 5:
Aug 29 12:42:08 localhost marvell88sx: [ID 702911 kern.notice]   device
error
Aug 29 12:42:08 localhost marvell88sx: [ID 702911 kern.notice]   SError
interrupt
Aug 29 12:42:08 localhost marvell88sx: [ID 702911 kern.notice]   EDMA
self disabled
Aug 29 12:43:08 localhost marvell88sx: [ID 812917 kern.warning] WARNING:
marvell88sx1: error on port 5:
Aug 29 12:43:08 localhost marvell88sx: [ID 702911 kern.notice]   device
disconnected
Aug 29 12:43:08 localhost marvell88sx: [ID 702911 kern.notice]   device
connected
Aug 29 12:43:08 localhost marvell88sx: [ID 702911 kern.notice]   SError
interrupt
Aug 29 12:43:10 localhost marvell88sx: [ID 812917 kern.warning] WARNING:
marvell88sx1: error on port 5:
Aug 29 12:43:10 localhost marvell88sx: [ID 702911 kern.notice]   SError
interrupt
Aug 29 12:43:10 localhost marvell88sx: [ID 702911 kern.notice]   link
data receive error - crc
Aug 29 12:43:10 localhost marvell88sx: [ID 702911 kern.notice]   link
data receive error - state
Aug 29 12:43:11 localhost marvell88sx: [ID 812917 kern.warning] WARNING:
marvell88sx1: error on port 5:
Aug 29 12:43:11 localhost marvell88sx: [ID 702911 kern.notice]   device
error
Aug 29 12:43:11 localhost marvell88sx: [ID 702911 kern.notice]   SError
interrupt
Aug 29 12:43:11 localhost marvell88sx: [ID 702911 kern.notice]   EDMA
self disabled
Aug 29 12:44:10 localhost marvell88sx: [ID 812917 kern.warning] WARNING:
marvell88sx1: error on port 5:
Aug 29 12:44:10 localhost marvell88sx: [ID 702911 kern.notice]   device
disconnected
Aug 29 12:44:10 localhost marvell88sx: [ID 702911 kern.notice]   device
connected
Aug 29 12:44:10 localhost marvell88sx: [ID 702911 kern.notice]   SError
interrupt


My question is: shouldn't it be possible for Solaris to stay up even
with an intermittent drive error?  I have a replacement drive and cable
on order to see if that fixes the problem.
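For the record, here is roughly the recovery sequence I plan to run once
the replacement arrives (a sketch; the device name c2t5d0 is taken from
the zpool status output above and may change if the new drive
enumerates differently):

```shell
# Reconnect the cable / install the replacement, then tell ZFS about it.

# If the original drive comes back at the same device path:
zpool online tank c2t5d0

# If it's a brand new drive at that path, resilver onto it instead:
zpool replace tank c2t5d0

# Watch the resilver, then scrub to verify everything is consistent:
zpool status -v tank
zpool scrub tank
```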

Thanks!



[zfs-discuss] Good 8 or 16 port x86 PCI SATA card

2006-07-21 Thread Shannon Roddy
Hi All,

I have looked at the HCL for Sol 10 x86 without much luck.  I am
looking for an 8 or 16 port SATA card for a JBOD Sol 10 x86 ZFS
installation.  Does anyone know of one that is well supported in Sol
10?  I am starting to do some testing with an LSI Logic 320-XLP SATA
RAID card, but as far as I can tell, it does not want to do JBOD.  For
several reasons, I would rather have ZFS handle the RAID.
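What I'm ultimately after, once I find a card that presents the disks
raw, is something like this (a sketch only; the pool name and device
names are made up for illustration):

```shell
# Two 8-disk raidz vdevs across the 16 bays, with ZFS doing all the
# redundancy -- no hardware RAID volume in the way.  Use 'format' or
# 'cfgadm -al' to find the real device names on a given box.
zpool create tank \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
    raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0

# Then carve out filesystems as needed:
zfs create tank/export
```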

Any recommendations would be appreciated.  I have a 16-bay case with
triple-redundant power supplies here that I would really like to use
with ZFS.  Unfortunately my CPU is 32-bit, so that may have to change.

Thanks,
Shannon



Re: [zfs-discuss] Solaris 10 6/06 now available for download

2006-06-27 Thread Shannon Roddy


> Solaris 10u2 was released today.  You can now download it from here:
>
> http://www.sun.com/software/solaris/get.jsp

Does anyone know if ZFS is included in this release?  One of my local
Sun reps said it did not make it into the u2 release, though I have
heard for ages that 6/06 would include it.
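If anyone else needs to check a box they already have, these are the
quick tests I know of (the package names are, to the best of my
knowledge, the standard ZFS packages):

```shell
# Identify the release:
cat /etc/release

# Are the ZFS userland tools present?
ls -l /usr/sbin/zpool /usr/sbin/zfs

# Are the ZFS packages installed?
pkginfo SUNWzfsr SUNWzfsu
```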

Thanks!
