Re: [zfs-discuss] j4200 drive carriers

2009-02-01 Thread Frank Cusack
 nevermind, i will just get a Promise array.

 Don't.  I don't normally like to badmouth vendors, but my experience
 with Promise was one of the worst in my career, for reasons that should
 be relevant to other ZFS-oriented customers.

recommendations for an alternative?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle raw volumes

2009-02-01 Thread Stefan Parvu
One question related to this: would KAIO be supported on such a configuration?

At least, trying to open /dev/zvol/rdsk/datapool/master, where master is defined as:

# zpool create -f datapool mirror c1t1d0 c2t0d0
# zfs create -V 1gb datapool/master
# ls -lrt /dev/zvol/rdsk/datapool/master
lrwxrwxrwx 1 root root 39 Feb 1 18:29 /dev/zvol/rdsk/datapool/master -> ../../../../devices/pseudo/z...@0:3c,raw


resulted with:
# truss -t kaio,lwp_create ./test-kaio /dev/zvol/rdsk/datapool/master
kaio(AIOINIT) = 0
lwp_create(0x08046860, LWP_DAEMON|LWP_DETACHED|LWP_SUSPENDED, 0x0804685C) = 2
/2: lwp_create() (returning as new lwp ...) = 0
/1: kaio(AIOREAD, 5, 0x08046C2C, 1024, 0, 0x0804702C) Err#81 EBADFD
/1: lwp_create(0x08046820, LWP_DAEMON|LWP_DETACHED|LWP_SUSPENDED, 0x0804681C) = 3
/3: lwp_create() (returning as new lwp ...) = 0
/1: lwp_create(0x08046820, LWP_DAEMON|LWP_DETACHED|LWP_SUSPENDED, 0x0804681C) = 4
/4: lwp_create() (returning as new lwp ...) = 0
/1: lwp_create(0x08046820, LWP_DAEMON|LWP_DETACHED|LWP_SUSPENDED, 0x0804681C) = 5
/5: lwp_create() (returning as new lwp ...) = 0
/1: lwp_create(0x08046820, LWP_DAEMON|LWP_DETACHED|LWP_SUSPENDED, 0x0804681C) = 6
/6: lwp_create() (returning as new lwp ...) = 0
/1: lwp_create(0x08046820, LWP_DAEMON|LWP_DETACHED|LWP_SUSPENDED, 0x0804681C) = 7
/7: lwp_create() (returning as new lwp ...) = 0
/4: kaio(AIONOTIFY) = 0
/1: kaio(AIOWAIT, 0x, 0) = 1
aio succeeded


Comments? The original question came from a real setup using Sybase, which 
spits out a lot of:

 733/1:  4137604  6  1 kaio(0x2, 0x, 0x1)                   = -1 Err#22
 733/1:  4137611  7  3 pollsys(0x110B3A8, 0x1A, 0x10009FE6BF0) = 1 0
 733/1:  4137614  3  0 kaio(0x2, 0x, 0x1)                   = -1 Err#22
 733/1:  4137622  5  2 recv(0x17, 0x100044F20F0, 0x800)        = 109 0
 733/1:  4137626  3  0 kaio(0x2, 0x, 0x1)                   = -1 Err#22
 733/1:  4137631  6  2 pollsys(0x110B3A8, 0x1A, 0x10009FE6BF0) = 0 0
 733/1:  4137633  3  0 kaio(0x2, 0x, 0x1)                   = -1 Err#22
 733/1:  4137636  4  1 pollsys(0x110B3A8, 0x1A, 0x10009FE6BF0) = 0 0
 733/1:  4137637  3  0 kaio(0x2, 0x, 0x1)                   = -1 Err#22
 733/1:  4137641  4  1 pollsys(0x110B3A8, 0x1A, 0x10009FE6BF0) = 0 0
 733/1:  4137642  3  0 kaio(0x2, 0x, 0x1)                   = -1 Err#22
 733/1:  4137646  4  1 pollsys(0x110B3A8, 0x1A, 0x10009FE6BF0) = 0 0
 733/1:  4137647  3  0 kaio(0x2, 0x, 0x1)                   = -1 Err#22
 733/1:  4137650  4  1 pollsys(0x110B3A8, 0x1A, 0x10009FE6BF0) = 0 0
 733/1:  4137652  3  0 kaio(0x2, 0x, 0x1)                   = -1 Err#22
 733/1:  4137655  4  1 pollsys(0x110B3A8, 0x1A, 0x10009FE6BF0) = 0 0
 733/1:  4137657  4  0 kaio(0x2, 0x, 0x1)                   = -1 Err#22
 733/1:  4137660  4  1 pollsys(0x110B3A8, 0x1A, 0x10009FE6BF0) = 0 0


Thanks,
Stefan

test-kaio.c: http://sunsite.uakom.sk/sunworldonline/swol-07-1998/swol-07-insidesolaris.html
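
For anyone who wants to reproduce the test, the kind of program behind that link looks roughly like the sketch below. This is a minimal reconstruction, not the article's code verbatim, and it assumes the classic Solaris aioread()/aiowait() interface from libaio (compile with something like "cc -o test-kaio test-kaio.c -laio"). It issues a single asynchronous read against the device named on the command line; running it under truss, as above, shows whether the request is serviced by kaio in the kernel or falls back to user-level LWP threads.

/*
 * Minimal test-kaio sketch (a reconstruction, not the article's code).
 * Issues one async read via the classic Solaris aioread()/aiowait()
 * interface so truss can show whether kaio handles it in the kernel.
 */
#include <sys/asynch.h>
#include <sys/types.h>
#include <sys/time.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int
main(int argc, char **argv)
{
        static char buf[1024];
        aio_result_t res;
        int fd;

        if (argc != 2) {
                (void) fprintf(stderr, "usage: %s <raw-device>\n", argv[0]);
                return (1);
        }
        if ((fd = open(argv[1], O_RDONLY)) < 0) {
                perror("open");
                return (1);
        }
        /* one async read of 1024 bytes at offset 0 */
        if (aioread(fd, buf, sizeof (buf), 0, SEEK_SET, &res) < 0) {
                perror("aioread");
                return (1);
        }
        /* block until the request completes */
        if (aiowait(NULL) == (aio_result_t *)-1) {
                perror("aiowait");
                return (1);
        }
        (void) printf("aio %s\n", res.aio_return >= 0 ? "succeeded" : "failed");
        return (0);
}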
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] j4200 drive carriers

2009-02-01 Thread Tim
On Sun, Feb 1, 2009 at 10:30 AM, Frank Cusack fcus...@fcusack.com wrote:

  nevermind, i will just get a Promise array.
 
  Don't.  I don't normally like to badmouth vendors, but my experience
  with Promise was one of the worst in my career, for reasons that should
  be relevant to other ZFS-oriented customers.

 recommendations for an alternative?


I would imagine something like this:
http://www.pc-pitstop.com/sas_cables_enclosures/scsase16.asp

or this:
http://www.pc-pitstop.com/sas_cables_enclosures/scsas16rm.asp

would get the job done.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zpool corrupt, crashes system when imported -- one last try

2009-02-01 Thread David Dyer-Bennet
Sorry for following up to myself.

It was suggested off-list that I try to see if the slightly older code in
the Belenix liveCD would maybe work with my pool.  Nope -- because I
upgraded my pool back when I thought OpenSolaris 2008.11 was going to
work, so it's too new a version for the code on that liveCD.  I *did* try
the 2008.11 liveCD, in addition to my installation, and they both crash
when I import the pool.

And I'm not going to do anything drastic and permanent today after all. 
Hoping for insight from somebody!

On Sat, January 31, 2009 22:51, David Dyer-Bennet wrote:
 As I've said, I've got this ZFS pool, with 450GB of my data in it, that
 crashes the system when I import it.  I've now got a dozen or so log
 entries on this, having finally gotten it to happen in a controlled enough
 environment that I can get at the log files and transfer them somewhere I
 can access them.  They all appear to be exactly the same thing.

 This crash happens nearly instantly after I do a "zpool import -f zp1".
 Since the stack traceback suggests it's involved in a scrub, and indeed I
 do think it was scrubbing when this first started happening, I've tried
 doing "zpool import -f zp1; zpool scrub -s zp1" in hopes that it will get
 in in time to stop the hypothetical scrub.

 I'm running OpenSolaris 2008.11.

 I'd like to rescue this pool.  If I can't, though, I've got to destroy it
 and restore from backup (which shouldn't make things much worse than they
 already are; my backups are in decent shape so far as I know).  And I've
 spent far far too much time on this (busy at work, other things at home,
 so this has dragged on interminably).  And I'd love to provide useful
 information to anybody interested in finding and fixing whatever is wrong
 in the code that left me in this position, of course.  Nobody has
 responded to my image of the stack traceback from yesterday, but I'm
 hoping that now that I've managed to get more information (including the
 stuff before the stack traceback), somebody may be able to do something
 with it.  I do also seem to have a dump, if that's any use to anybody:

 Jan 31 22:34:11 fsfs genunix: [ID 111219 kern.notice] dumping to
 /dev/zvol/dsk/rpool/dump, offset 65536, content: kernel
 Jan 31 22:34:21 fsfs genunix: [ID 409368 kern.notice] ^M100% done: 197356
 pages dumped, compression ratio 3.20,
 Jan 31 22:34:21 fsfs genunix: [ID 851671 kern.notice] dump succeeded

 Here's how the pool shows before I try to import it:
 local...@fsfs:~$ pfexec zpool import
   pool: zp1
 id: 4278074723887284013
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

         zp1         ONLINE
           mirror    ONLINE
             c5t0d0  ONLINE
             c5t1d0  ONLINE
           mirror    ONLINE
             c6t0d0  ONLINE
             c6t1d0  ONLINE
 local...@fsfs:~$

 Should that report say if a scrub was in progress?

 And here's the crash that happens immediately when I do import it:

 Jan 31 22:34:10 fsfs unix: [ID 836849 kern.notice]
 Jan 31 22:34:10 fsfs ^Mpanic[cpu1]/thread=ff00045fdc80:
 Jan 31 22:34:10 fsfs genunix: [ID 335743 kern.notice] BAD TRAP: type=e
 (#pf Page fault) rp=ff00045fcd60 addr=4e8 occurred in module unix
 due to a NULL pointer dereference
 Jan 31 22:34:10 fsfs unix: [ID 10 kern.notice]
 Jan 31 22:34:10 fsfs unix: [ID 839527 kern.notice] sched:
 Jan 31 22:34:10 fsfs unix: [ID 753105 kern.notice] #pf Page fault
 Jan 31 22:34:10 fsfs unix: [ID 532287 kern.notice] Bad kernel fault at
 addr=0x4e8
 Jan 31 22:34:10 fsfs unix: [ID 243837 kern.notice] pid=0,
 pc=0xfb84e84b, sp=0xff00045fce58, eflags=0x10246
 Jan 31 22:34:10 fsfs unix: [ID 211416 kern.notice] cr0:
 8005003b<pg,wp,ne,et,ts,mp,pe> cr4: 6f8<xmme,fxsr,pge,mce,pae,pse,de>
 Jan 31 22:34:10 fsfs unix: [ID 624947 kern.notice] cr2: 4e8
 Jan 31 22:34:10 fsfs unix: [ID 625075 kern.notice] cr3: 340
 Jan 31 22:34:10 fsfs unix: [ID 625715 kern.notice] cr8: c
 Jan 31 22:34:10 fsfs unix: [ID 10 kern.notice]
 Jan 31 22:34:10 fsfs unix: [ID 592667 kern.notice]rdi:  4e8
 rsi: a200 rdx: ff00045fdc80
 Jan 31 22:34:10 fsfs unix: [ID 592667 kern.notice]rcx:1
 r8: ff01599c8540  r9:0
 Jan 31 22:34:10 fsfs unix: [ID 592667 kern.notice]rax:0
 rbx:0 rbp: ff00045fced0
 Jan 31 22:34:10 fsfs unix: [ID 592667 kern.notice]r10: 5b40
 r11:0 r12: ff0161e92040
 Jan 31 22:34:10 fsfs unix: [ID 592667 kern.notice]r13: ff0176684800
 r14: ff0161e92040 r15:0
 Jan 31 22:34:10 fsfs unix: [ID 592667 kern.notice]fsb:0
 gsb: ff0149f68500  ds:   4b
 Jan 31 22:34:10 fsfs unix: [ID 592667 kern.notice] es:   4b
 fs:0  gs:  1c3
 Jan 31 22:34:10 fsfs unix: [ID 592667 kern.notice]trp:e
 err:2 rip: 

Re: [zfs-discuss] j4200 drive carriers

2009-02-01 Thread Will Murnane
On Sun, Feb 1, 2009 at 11:30, Frank Cusack fcus...@fcusack.com wrote:
 recommendations for an alternative?
At work we just ordered a Supermicro CSE-846E1-R900B [1]: 24 hot-swap
bays, redundant 900W power supplies, LSI SAS expander.  This doesn't
quite make a JBOD by itself (it's designed to be used as a case for a
whole system); you need to add a PCI-slot-based external-to-internal
SAS cable [2] and a so-called JBOD control board [3] which makes the case
power up without a motherboard in it.  This is the cheapest SAS
expander-based chassis I've found; even after adding the two
additional parts mentioned above, cost from provantage.com shipped is
around $1250 (depending on your location, of course).  You'll need to
add a SAS controller of your choice and a cable with an SFF-8088
connection on one end (for the JBOD end) and whatever the SAS
controller has on the other end.  I believe you can daisy-chain this
setup by connecting a cable appropriately.  These parts are available
from other suppliers; provantage is linked simply because they have
all of them at one store and the price on the case is the best I've
found.

I haven't received all the parts to try this out yet, but I'm
confident that this will work as specified.  YMMV.

Will

[1]: http://www.provantage.com/supermicro-cse-846e1-r900b~7SUPM1YV.htm
[2]: http://www.provantage.com/supermicro-cbl-0168l~7SUPA01T.htm
[3]: http://www.provantage.com/supermicro-cse-ptjbod-cb1~7SUP9023.htm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] j4200 drive carriers

2009-02-01 Thread Frédéric VANNIERE
J4200 is cheap compared to custom-made solutions (Supermicro + 3Ware). Ask a 
Sun reseller for a quote. There is also the Sun Startup Essentials program with 
great prices on storage systems. 

Last year I bought an Infortrend RAID array with custom disks, but today I 
would choose a J4400 + 24 TB.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] j4200 drive carriers

2009-02-01 Thread Richard Elling
John-Paul Drawneek wrote:
 the J series is far too new to be hitting eBay yet.

 And a lot of people will not be buying the J series for obvious reasons
   

The obvious reason is that Sun cannot service random disk
drives you buy from Fry's (or elsewhere). People who value data
tend to value service contracts for disk drives.
-- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS extended ACL

2009-02-01 Thread Fredrich Maney
On Fri, Jan 30, 2009 at 1:19 PM, Miles Nordin car...@ivy.net wrote:
 fm == Fredrich Maney fredrichma...@gmail.com writes:

fm changing the default toolset (without notification)

 I wouldn't wish for notification all the time and tell people they
 cannot move unless they notify everyone, or you will get a bunch of
 CYA disclaimers and still have no input.

Neither would I, nor is that what I asked for. I would however like to see
changes like this documented in Release Notes or the Project Page.
Unfortunately, that doesn't seem to happen as often as it should.

 And if you won't take the
 time to discuss and compromise, you should not be allowed to block
 people trying to improve things by demanding they seek you out, beg
 for Your Obstinence's attention, and yield and appeal for the
 generosity of your blessing before they can move.

Are you looking down your nose at just me, or everyone that is not amused
with the "it's new, therefore it MUST be better" crowd?

 This is the recipe for a stagnant, bit-rotted system, and it's the
 *FIRST* attitude the Linux camp threw over the railing.

fm from the expected native tools to external tools in order to
fm make OpenSolaris more user friendly and familiar to the Linux
fm crowd.

 Yes!  I agree it's a very bad reason.  I've yelled at not a few Linux
 people who came to BSD and complained the installer wasn't friendly
 because it didn't have white-on-blue text.  I told them we are
 building an OS so here are the .tar.gz which is all I ever use, so
 don't talk to me of this remedial installer.  If you want to build
 installers only, you know where to find the other tent.

 ``The old tools are unmaintained, ancient and not supporting expected
 modern standards and features needed to get work done conveniently,
 and not supporting the stream formats and shell scripts of most other
 systems (not just Linux ones!), AND don't come with source,'' are
 several really good reasons.  I think you are WAY too easy on
 yourselves to say these people want something familiar when some of us
 really want something that works well and is not frozen or abandoned.

All of those statements are subjective viewpoints. I look at the feature sets
for the ancient tools and see a lot of features that are missing from the
GNU alternatives, as well as a lot of patches pointing out the lie in
"unmaintained".

Just because they don't have the feature du jour found in the latest unstable
build of the "so roll your own and smoke it" Linux distro doesn't mean that it
is unmaintained or ancient.

 However, just moving the path around is not enough.  Whatever tools
 are going to be the Default ones, the ZFS-specific features need to be
 added to THOSE tools, not just to some tools somewhere that I have to
 go hunting for.  which is how this thread began.  It wasn't about
 ``OMG a GNUism!  why was I not INFORMED?!,'' it was about why is this
 ZFS feature missing from the Default tool?

This thread began because a feature that was expected, and present
in previous versions, was suddenly missing, due to a fairly significant
change (in impact, not amount of work involved) that was, to all appearances,
arbitrarily made and never communicated.

 This probably means there should eventually be ONE set of default
 tools, unless we are really going to meticulously add these ZFS ACL
 features and every other future architecture to both GNU and ancient
 usr/bin tools and ancient xpg4 tools and xpg6 tools and ucb tools and
 ancient 5bin tools, forever.

Agreed. Oddly enough, that seems to be the path that was taken by Sun quite
some time ago with /usr/bin. Those tools are the standard, default tools
on Sun systems for a reason: they are the ones that are maintained and
updated with new features (some internally developed, some ported from
elsewhere where possible and useful).

[...]

fpsm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] j4200 drive carriers

2009-02-01 Thread Tim
On Sun, Feb 1, 2009 at 2:24 PM, Richard Elling richard.ell...@gmail.comwrote:

 John-Paul Drawneek wrote:
  the J series is far too new to be hitting eBay yet.
 
  And a lot of people will not be buying the J series for obvious reasons
 

 The obvious reason is that Sun cannot service random disk
 drives you buy from Fry's (or elsewhere). People who value data
 tend to value service contracts for disk drives.
 -- richard


I don't think anyone is asking Sun to service disks that aren't theirs, but
to claim that's a reason for not selling drive trays is crap.  You already
claimed in the other thread that Sun has contracts for custom disks so they
don't have to worry about short stroking/right-sizing, so it should be pretty
frigging obvious to support whether the disks in the JBOD are Sun or some random
crap purchased from Fry's.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] j4200 drive carriers

2009-02-01 Thread James C. McPherson
On Sun, 01 Feb 2009 14:45:28 -0600
Tim t...@tcsac.net wrote:

 On Sun, Feb 1, 2009 at 2:24 PM, Richard Elling
 richard.ell...@gmail.comwrote:
 
  John-Paul Drawneek wrote:
   the J series is far too new to be hitting eBay yet.
  
   And a lot of people will not be buying the J series for obvious
   reasons
  
 
  The obvious reason is that Sun cannot service random disk
  drives you buy from Fry's (or elsewhere). People who value data
  tend to value service contracts for disk drives.
  -- richard
 
 
 I don't think anyone is asking Sun to service disks that aren't
 theirs, but to claim that's a reason for not selling drive trays is
 crap. 

No, it's not. To claim that it was the *sole* reason
might be, but that's not what Richard said.



James
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle raw volumes

2009-02-01 Thread Jim Dunham
Stefan,

 One question related to this: would KAIO be supported on such a
 configuration?

Yes, but not as one might expect.

As seen in the truss output below, the call to kaio() fails with
EBADFD. This is a direct result of the fact that ZFS's cb_ops entry points
for asynchronous read and write are set to nodev, which causes ENXIO to be
returned. The kaio() routine detects this ENXIO error and allocates
LWP threads to perform the now-synchronous I/O asynchronously in user space.
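
For anyone unfamiliar with the driver side of this, the pattern in question looks roughly like the fragment below. This is an illustrative sketch only, not the actual zvol.c listing, and the xx_* entry points are placeholder names. The part that matters is the last two cb_ops slots: with cb_aread and cb_awrite set to nodev, an asynchronous request to the device fails and libaio falls back to its user-level LWP implementation, which is exactly the behavior visible in the truss output.

/*
 * Illustrative driver fragment (not the actual zvol.c source).
 * The xx_* routines are placeholders; the point is the last two
 * entries, which disable kernel async I/O for this device.
 */
#include <sys/types.h>
#include <sys/buf.h>
#include <sys/uio.h>
#include <sys/cred.h>
#include <sys/conf.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>

extern int xx_open(dev_t *, int, int, cred_t *);
extern int xx_close(dev_t, int, int, cred_t *);
extern int xx_strategy(struct buf *);
extern int xx_read(dev_t, struct uio *, cred_t *);
extern int xx_write(dev_t, struct uio *, cred_t *);
extern int xx_ioctl(dev_t, int, intptr_t, int, cred_t *, int *);

static struct cb_ops xx_cb_ops = {
        xx_open,                /* open */
        xx_close,               /* close */
        xx_strategy,            /* strategy */
        nodev,                  /* print */
        nodev,                  /* dump */
        xx_read,                /* read */
        xx_write,               /* write */
        xx_ioctl,               /* ioctl */
        nodev,                  /* devmap */
        nodev,                  /* mmap */
        nodev,                  /* segmap */
        nochpoll,               /* poll */
        ddi_prop_op,            /* prop_op */
        NULL,                   /* streamtab */
        D_NEW | D_MP,           /* flag */
        CB_REV,                 /* rev */
        nodev,                  /* aread: no kernel async I/O */
        nodev                   /* awrite: no kernel async I/O */
};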

Although the following CR does not explicitly call it out,
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6496357,
it resulted in the following changes being made to zfs_ioctl.c and
zvol.c: the removal of the kernel-mode AIO interfaces.

http://cvs.opensolaris.org/source/diff/onnv/onnv-gate/usr/src/uts/common/fs/zfs/zfs_ioctl.c?r1=%2Fonnv%2Fonnv-gate%2Fusr%2Fsrc%2Futs%2Fcommon%2Ffs%2Fzfs%2Fzfs_ioctl.c%403638r2=%2Fonnv%2Fonnv-gate%2Fusr%2Fsrc%2Futs%2Fcommon%2Ffs%2Fzfs%2Fzfs_ioctl.c%403444

http://cvs.opensolaris.org/source/diff/onnv/onnv-gate/usr/src/uts/common/fs/zfs/zvol.c?r1=%2Fonnv%2Fonnv-gate%2Fusr%2Fsrc%2Futs%2Fcommon%2Ffs%2Fzfs%2Fzvol.c%403638r2=%2Fonnv%2Fonnv-gate%2Fusr%2Fsrc%2Futs%2Fcommon%2Ffs%2Fzfs%2Fzvol.c%403461

These routines were removed as part of 6496357, since both invoked
zvol_strategy, a routine that is somewhat synchronous in nature. Even
when invoked from multiple LWP threads, parallel threads can still
synchronize, depending on the byte-range locking at the following location:


http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/zvol.c#1130

- Jim


 At least trying to open /dev/zvol/rdsk/datapool/master where master  
 is defined
 as:

 # zpool create -f datapool mirror c1t1d0 c2t0d0
 # zfs create -V 1gb datapool/master
 # ls -lrt /dev/zvol/rdsk/datapool/master
 lrwxrwxrwx 1 root root 39 Feb 1 18:29 /dev/zvol/rdsk/datapool/master  
 - ../../../../devices/pseudo/z...@0:3c,raw


 resulted with:
 # truss -t kaio,lwp_create ./test-kaio /dev/zvol/rdsk/datapool/master
 kaio(AIOINIT) = 0
 lwp_create(0x08046860, LWP_DAEMON|LWP_DETACHED|LWP_SUSPENDED,  
 0x0804685C) = 2
 /2: lwp_create() (returning as new lwp ...) = 0
 /1: kaio(AIOREAD, 5, 0x08046C2C, 1024, 0, 0x0804702C) Err#81 EBADFD
 /1: lwp_create(0x08046820, LWP_DAEMON|LWP_DETACHED|LWP_SUSPENDED,  
 0x0804681C) = 3
 /3: lwp_create() (returning as new lwp ...) = 0
 /1: lwp_create(0x08046820, LWP_DAEMON|LWP_DETACHED|LWP_SUSPENDED,  
 0x0804681C) = 4
 /4: lwp_create() (returning as new lwp ...) = 0
 /1: lwp_create(0x08046820, LWP_DAEMON|LWP_DETACHED|LWP_SUSPENDED,  
 0x0804681C) = 5
 /5: lwp_create() (returning as new lwp ...) = 0
 /1: lwp_create(0x08046820, LWP_DAEMON|LWP_DETACHED|LWP_SUSPENDED,  
 0x0804681C) = 6
 /6: lwp_create() (returning as new lwp ...) = 0
 /1: lwp_create(0x08046820, LWP_DAEMON|LWP_DETACHED|LWP_SUSPENDED,  
 0x0804681C) = 7
 /7: lwp_create() (returning as new lwp ...) = 0
 /4: kaio(AIONOTIFY) = 0
 /1: kaio(AIOWAIT, 0x, 0) = 1
 aio succeeded


 Comments ? The original question came from a real setup using Sybase  
 which spits a lot of:

  733/1:   4137604   6  1 kaio(0x2, 0x,  
 0x1)  = -1 Err#22
   733/1:   4137611   7  3 pollsys(0x110B3A8, 0x1A,  
 0x10009FE6BF0)   = 1 0
   733/1:   4137614   3  0 kaio(0x2, 0x,  
 0x1)   = -1 Err#22
   733/1:   4137622   5  2 recv(0x17, 0x100044F20F0,  
 0x800)   = 109 0
   733/1:   4137626   3  0 kaio(0x2, 0x,  
 0x1)   = -1 Err#22
   733/1:   4137631   6  2 pollsys(0x110B3A8, 0x1A,  
 0x10009FE6BF0)   = 0 0
   733/1:   4137633   3  0 kaio(0x2, 0x,  
 0x1)   = -1 Err#22
   733/1:   4137636   4  1 pollsys(0x110B3A8, 0x1A,  
 0x10009FE6BF0)   = 0 0
   733/1:   4137637   3  0 kaio(0x2, 0x,  
 0x1)   = -1 Err#22
   733/1:   4137641   4  1 pollsys(0x110B3A8, 0x1A,  
 0x10009FE6BF0)   = 0 0
   733/1:   4137642   3  0 kaio(0x2, 0x,  
 0x1)   = -1 Err#22
   733/1:   4137646   4  1 pollsys(0x110B3A8, 0x1A,  
 0x10009FE6BF0)   = 0 0
   733/1:   4137647   3  0 kaio(0x2, 0x,  
 0x1)   = -1 Err#22
   733/1:   4137650   4  1 pollsys(0x110B3A8, 0x1A,  
 0x10009FE6BF0)   = 0 0
   733/1:   4137652   3  0 kaio(0x2, 0x,  
 0x1)   = -1 Err#22
   733/1:   4137655   4  1 pollsys(0x110B3A8, 0x1A,  
 0x10009FE6BF0)   = 0 0
   733/1:   4137657   4  0 kaio(0x2, 0x,  
 0x1)   = -1 Err#22
   733/1:   4137660   4  1 pollsys(0x110B3A8, 0x1A,  
 0x10009FE6BF0)   = 0 0


 Thanks,
 Stefan

 test-kaio.c 
 http://sunsite.uakom.sk/sunworldonline/swol-07-1998/swol-07-insidesolaris.html
 -- 
 This message posted 

Re: [zfs-discuss] j4200 drive carriers

2009-02-01 Thread Bob Friesenhahn
On Sun, 1 Feb 2009, Tim wrote:

 I don't think anyone is asking Sun to service disks that aren't 
 theirs, but to claim that's a reason for not selling drive trays is 
 crap.  You already claimed in the other thread that Sun has 
 contracts for custom disks so they don't have to worry about short 
 stroking/right-sizing so it should be pretty frigging obvious to 
 support if the disks in the JBOD are Sun or some random crap 
 purchased from fry's.

I agree that this is cause for concern.  For example in the past six 
months Sun has silently dropped almost all of its desktop product 
line.  This includes the Sun Ultra-40 that I bought at great expense. 
There is no indication that there will be any remaining products in 
the desktop line and maybe there will be no Sun desktop offering at 
all once the remaining low-end system is discontinued.  Quite a pity 
since Sun desktops were top quality and price competitive for the 
features offered.

Today I ordered the drive extension bay to support four more drives in 
this system simply because I don't trust that Sun will provide 
adequate support for the product it sold to me.  So I purchased 
expansion hardware that I might not even use out of pure unadulterated 
fear.  I have seen how quickly Sun's support for discontinued products 
vaporizes.  I can not afford $6K for the six additional drives at the 
moment so I can only hope that Sun (or some other vendor) will still 
be willing to sell SAS drives with appropriate brackets to fit in this 
system when I am ready to buy them.

Bob
==
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] j4200 drive carriers

2009-02-01 Thread Frank Cusack
On February 1, 2009 12:24:08 PM -0800 Richard Elling 
richard.ell...@gmail.com wrote:
 John-Paul Drawneek wrote:
 the J series is far too new to be hitting eBay yet.

 And a lot of people will not be buying the J series for obvious reasons


 The obvious reason is that Sun cannot service random disk
 drives you buy from Fry's (or elsewhere). People who value data
 tend to value service contracts for disk drives.

i don't agree.  disk drives, well non-Sun branded ones, are cheap
enough to just keep on hand.  even for my home systems, i always
buy an extra hard drive of the same size/type the system came with.

i expect drives to fail, and hence i used SVM or now for the past few
years, zfs, for availability.  that covers a failure immediately
until such time, usu. 1 day max, that the physical drive can be swapped.

a service contract for a hard drive seems a complete waste of dollars,
esp. when you consider the 5x markup on the drive.  once i've swapped
in my cold spare, if i can't wait for a warranty replacement i simply
buy another drive.

for parts that might be hard to get, or very expensive, a service contract
makes great sense.  but for hard drives?

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Two-level ZFS

2009-02-01 Thread Gary Mills
I realize that this configuration is not supported.  What's required
to make it work?  Consider a file server running ZFS that exports a
volume with iSCSI.  Consider also an application server that imports
the LUN with iSCSI and runs a ZFS filesystem on that LUN.  All of the
redundancy and disk management takes place on the file server, but
end-to-end error detection takes place on the application server.
This is a reasonable configuration, is it not?

When the application server detects a checksum error, what information
does it have to return to the file server so that it can correct the
error?  The file server could then retry the read from its redundant
source, which might be a mirror or might be synthetic data from
RAID-5.  It might also indicate that a disk must be replaced.

Must any information accompany each block of data sent to the
application server so that the file server can identify the source
of the data in the event of an error?

Does this additional exchange of information fit into the iSCSI
protocol, or does it have to flow out of band somehow?

-- 
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] j4200 drive carriers

2009-02-01 Thread Frank Cusack
On February 1, 2009 12:02:11 PM -0800 Frédéric VANNIERE 
f.vanni...@planet-work.com wrote:
 J4200 is cheap compared to custom made solutions (Supermicro + 3Ware).

i can't see how that math works.

by the time i fully populate a J4200 (12TB), Sun charges in the $10k
neighborhood.  i can do this using Promise or Xtore or whatever for $3k.

 Last year I've bought an Infortrend RAID array with custom disks but today
 I would choose a J4400 + 24 TB

again, the Sun enclosure is about the same price as any other enclosure, but
the Sun drives make 24TB cost ... well the Sun store is down right now, but
I'll estimate $30k or just to be generous let's use their advertised figure
of $1/GB, so $24k.  An Xtore with 24TB runs $11k.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Bad sectors arises - discs differ in size - trouble?

2009-02-01 Thread Orvar Korvar
You have a RAID with 5 terabyte discs. Now some bad sectors arise, so the 
discs differ in size. What happens with the ZFS RAID? Will there be serious 
trouble? Or is this only a problem when the ZFS RAID is 100% full?

Of old, in Linux you didn't allocate the entire disc. Instead you left 100MB 
free so there would be headroom for bad sectors. If there are bad sectors the 
discs will differ in size. Is this a problem?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Two-level ZFS

2009-02-01 Thread Nicolas Williams
On Sun, Feb 01, 2009 at 04:26:13PM -0600, Gary Mills wrote:
 I realize that this configuration is not supported.  What's required

It would be silly for ZFS to support zvols as iSCSI LUNs and then say
you can put anything but ZFS on them.  I'm pretty sure there's no such
restriction.

(That said, I can't speak for the ZFS team, and it's remotely imaginable
-barely- that there could be such a limitation, but it would be strange
indeed.)

 to make it work?  Consider a file server running ZFS that exports a
 volume with Iscsi.  Consider also an application server that imports
 the LUN with Iscsi and runs a ZFS filesystem on that LUN.  All of the
 redundancy and disk management takes place on the file server, but
 end-to-end error detection takes place on the application server.
 This is a reasonable configuration, is it not?

IMO it's no more and no less reasonable than putting a DB on an iSCSI
LUN backed by ZFS.

 When the application server detects a checksum error, what information
 does it have to return to the file server so that it can correct the
 error?  The file server could then retry the read from its redundant
 source, which might be a mirror or might be synthentic data from
 RAID-5.  It might also indicate that a disk must be replaced.

ZFS relies on doing all the mirroring and/or RAID-Z work itself in order
to be able to recover from bad blocks.  If you put the redundancy below
ZFS (e.g., using HW RAID) then you typically lose that capability, BUT
with ZFS on the target side the target-side ZFS will be able to do the
recovery while still retaining end-to-end integrity protection.  Neat,
eh?
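
To make that recovery step concrete, here is a toy sketch of the idea (conceptual only, not ZFS code; the checksum and block layout are invented for illustration): a layer that keeps its own checksum above two mirror copies can detect a bad copy on read and heal it from the good one.

/*
 * Conceptual sketch only (not ZFS source): how a layer that owns its
 * own checksum and mirror copies can self-heal a bad read.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define BLKSZ 512

/* toy Fletcher-style checksum, standing in for ZFS block checksums */
static uint64_t
cksum(const uint8_t *buf, size_t len)
{
        uint64_t a = 0, b = 0;
        size_t i;

        for (i = 0; i < len; i++) {
                a += buf[i];
                b += a;
        }
        return ((b << 32) | (a & 0xffffffff));
}

/* one logical block kept on two mirror sides, with the expected
 * checksum stored "above" the copies (as in a ZFS block pointer) */
struct mirrored_blk {
        uint8_t  side[2][BLKSZ];
        uint64_t expected;
};

/* read path: verify each copy against the expected checksum; return
 * the first good one and overwrite the other copy with it (harmless
 * if that copy was already good) */
static int
self_healing_read(struct mirrored_blk *mb, uint8_t *out)
{
        int s;

        for (s = 0; s < 2; s++) {
                if (cksum(mb->side[s], BLKSZ) == mb->expected) {
                        memcpy(out, mb->side[s], BLKSZ);
                        memcpy(mb->side[1 - s], out, BLKSZ);
                        return (0);
                }
        }
        return (-1);    /* both copies bad: unrecoverable */
}

int
main(void)
{
        struct mirrored_blk mb;
        uint8_t out[BLKSZ];

        memset(mb.side[0], 0xab, BLKSZ);
        memset(mb.side[1], 0xab, BLKSZ);
        mb.expected = cksum(mb.side[0], BLKSZ);

        mb.side[0][7] ^= 0x01;  /* simulate silent corruption of one copy */

        if (self_healing_read(&mb, out) == 0)
                (void) printf("read ok, bad copy repaired\n");
        else
                (void) printf("unrecoverable\n");
        return (0);
}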

 Must any information accompany each block of data sent to the
 application server so that the file server can identify the source
 of the data in the event of an error?

In this case there's no need: the target will know which blocks are bad
and automatically correct (assuming enough redundancy exists).

Nico
-- 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Bad sectors arises - discs differ in size - trouble?

2009-02-01 Thread Eric D. Mudama
On Sun, Feb  1 at 14:45, Orvar Korvar wrote:

 You have a raid with 5 terabyte discs. Now some bad sectors arises,
 so the discs differ in size. What happens with the ZFS raid? Will
 there be seriuos trouble? Or is this only a problem when ZFS raid is
 100% full?

 Of old, in Linux you didnt allocate the entire disc. Instead you let
 100MB be free so there would be headroom for bad sectors. If there
 are bad sectors the discs will differ in size. Is this a problem?


Disk vendors already maintain enough spare capacity so that the drive
can transparently provide defect management for a few hundred thousand
bad sectors without having to report that it's smaller.

If the device reports a smaller capacity than sold originally, short
of things like HPA or DCO, it has likely exhausted its spare pool, and
it's time to RMA the drive with some urgency.

-- 
Eric D. Mudama
edmud...@mail.bounceswoosh.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] j4200 drive carriers

2009-02-01 Thread Richard Elling
Tim wrote:


 On Sun, Feb 1, 2009 at 2:24 PM, Richard Elling 
 richard.ell...@gmail.com wrote:

 John-Paul Drawneek wrote:
   the J series is far too new to be hitting eBay yet.
  
   And a lot of people will not be buying the J series for obvious
  reasons
 

 The obvious reason is that Sun cannot service random disk
 drives you buy from Fry's (or elsewhere). People who value data
 tend to value service contracts for disk drives.
 -- richard


 I don't think anyone is asking Sun to service disks that aren't 
 theirs, but to claim that's a reason for not selling drive trays is 
 crap.  You already claimed in the other thread that Sun has contracts 
 for custom disks so they don't have to worry about short 
 stroking/right-sizing so it should be pretty frigging obvious to 
 support if the disks in the JBOD are Sun or some random crap 
 purchased from fry's.

The drives that Sun sells will come with the correct bracket.
Ergo, there is no reason to sell the bracket as a separate
item unless the customer wishes to place non-Sun disks in
them.  That represents a service liability for Sun, so they are
not inclined to do so.  It is really basic business.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] j4200 drive carriers

2009-02-01 Thread David Dyer-Bennet

On Sun, February 1, 2009 14:24, Richard Elling wrote:
 John-Paul Drawneek wrote:
 the J series is far too new to be hitting eBay yet.

 And a lot of people will not be buying the J series for obvious reasons


 The obvious reason is that Sun cannot service random disk
 drives you buy from Fry's (or elsewhere). People who value data
 tend to value service contracts for disk drives.

The people who in fact want service contracts covering their drives will
of course buy from Sun; that works well for them, and well for Sun.  Win
win!

Thing is, the people who *don't* want service contracts covering their
drives (which, in my experience, is everybody in the SATA era; it's a
silly waste of money) will be pissed off, and will have to avoid buying
that Sun product.  This is a lose for Sun, and possibly a lose for those customers too.

Will anybody actually be forced to buy their drives from Sun?  If so, is
that something for Sun to be proud of?  It looks to me as if this kind of
policy doesn't bring business to Sun, it drives it away.  At least, it
doesn't bring *happy* business to Sun; it might bring people feeling
coerced and looking for an excuse to escape.

Google, who live and die by their data, prefer to use cheap commodity
hardware.  Since they're *already* set up with redundancy, they can afford
a higher failure rate.  This sort of thinking is true of a lot of high-end
places these days: their reliability requirements are so high that one
good server (or disk drive) can't possibly meet them anyway.
-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] j4200 drive carriers

2009-02-01 Thread Andre van Eyssen
On Sun, 1 Feb 2009, Richard Elling wrote:


 The drives that Sun sells will come with the correct bracket.
 Ergo, there is no reason to sell the bracket as a separate
 item unless the customer wishes to place non-Sun disks in
 them.  That represents a service liability for Sun, so they are
 not inclined to do so.  It is really basic business.
 -- richard

This thread has been running for a little too long, considering the issues 
are pretty simple.

Sun sells a JBOD storage product, along with disks and accessories. The 
disks they provide are mounted in the correct carriers for the array. The 
pricing of the disks and accessories is part of the price calculation for 
the entire system - you could provide the array empty, with a full set of 
empty carriers, but the price will go up.

No brand name storage vendor supports or encourages installation of third 
party disks. It's not the way the business works. If the customer wants 
the reassurance, quality, (etc, etc) associated with buying brand name 
storage, they purchase the disks from the same vendor. If price is more 
critical than these factors, there's a wide range of white box 
solutions on the market. Try approaching IBM, HP, EMC, HDS, NetApp or 
similar and ask to buy an empty JBOD and spare trays - it's not happening.

Yes, this is unfortunate for those who would like to purchase a Sun JBOD 
for home or for a microbusiness. However, these users are probably aware 
that if they want to buy their own spindles and run an unsupported 
configuration, their local metal shop will be happy to bang out some 
frames. Not to mention the fact that one could always run up a set of 
sleds in the shed without too much strife - in fact, in the past when 
spuds were less commonly available, I've seen a home user make sleds out 
of wood that did the job.

Now, I'm looking forward to seeing the first Sun JBOD loaded up with 
CNC-milled mahogany sleds. It'll look great.

-- 
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] j4200 drive carriers

2009-02-01 Thread Bryan Allen
+--
| On 2009-02-01 16:29:59, Richard Elling wrote:
| 
| The drives that Sun sells will come with the correct bracket.
| Ergo, there is no reason to sell the bracket as a separate
| item unless the customer wishes to place non-Sun disks in
| them.  That represents a service liability for Sun, so they are
| not inclined to do so.  It is really basic business.

A specific example of why this policy drives me crazy:

X4150. 8 disk bays. SAS HBA, SATA onboard.

In its initial incarnation, the SAS HBA was optional (this was
fixed soon after its release, but I got bitten by it, ordering
it before support even knew it was released). I had also read
that Sun would be selling 2.5" SATA disks for the system by the
end of the year (2008).

Regardless, mixing SAS and SATA in this system would be
incredibly useful for any number of applications.

Sun does not sell 2.5" SATA disks for the X4150, and I cannot
purchase sleds.

Further: X4150s are presumably commonly deployed for database
work (that's what both of mine do, along with a lot else).

According to zilstat (thank you, Richard!), r/w SSDs would prove
beneficial.

Sun does not sell SSDs for the X4150, and I cannot purchase
sleds.

Obviously, I could buy 73GB SAS disks and simply repurpose their
sleds. I hope everyone (who is a consumer) can agree that sucks.

<rant>

The company I work for is a small ESP; we've been around forever
(1995), and until I moved us to Solaris two years ago, we were a
pure Linux shop.

We like Sun software. We like Sun hardware. We had a (small) part
to play in getting DTrace into Perl 5.10. We're proponents of the
company and its technology.

As with everyone else who has asked for this sort of thing, I
just wish Sun would offer me a bit more flexibility so I can more
effectively do my job using their gear.

(I went the eBay route for my X4100 sleds; the way the numbers
came out in the end, it didn't really make a lot of sense to do
it again, so I haven't. But I can actually buy the disks I wanted
for my X4100s from Sun.)

Like all small fish, we have to move fast and be creative with
our purchasing and infrastructure. Sun's focus on large
deployments and Corporate can often leave us little guys in the
cold.

Given Sun's business focus and climate (as Richard says, it's
basic business), I've mostly learned to suck it up, but that
doesn't make things like this taste any less bitter.

</rant>
-- 
bda
Cyberpunk is dead.  Long live cyberpunk.
http://mirrorshades.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Two-level ZFS

2009-02-01 Thread Jim Dunham
Gary,

 I realize that this configuration is not supported.

The configuration is supported, but not in the manner mentioned below.

If there are two (or more) instances of ZFS in the end-to-end data
path, each instance is responsible for its own redundancy and error
recovery. There is no in-band communication between one instance of
ZFS and another instance of ZFS located elsewhere in the same
end-to-end data path.

A key point is that a ZVOL provides the same block I/O
semantics as any other Solaris block device; therefore, when a ZVOL is
configured as an iSCSI target and the target is accessed by an iSCSI
initiator LU, there is no awareness that a ZVOL is the backing store
of this LU.

Although not quite the same issue, this ZFS discussion list often raises
questions about configuring ZFS on RAID-enabled storage arrays, and how
using simple JBODs might be a better solution.

Jim Dunham
Engineering Manager
Sun Microsystems, Inc.
Storage Platform Software Group

 What's required
 to make it work?  Consider a file server running ZFS that exports a
 volume with Iscsi.  Consider also an application server that imports
 the LUN with Iscsi and runs a ZFS filesystem on that LUN.  All of the
 redundancy and disk management takes place on the file server, but
 end-to-end error detection takes place on the application server.
 This is a reasonable configuration, is it not?

 When the application server detects a checksum error, what information
 does it have to return to the file server so that it can correct the
 error?  The file server could then retry the read from its redundant
 source, which might be a mirror or might be synthentic data from
 RAID-5.  It might also indicate that a disk must be replaced.

 Must any information accompany each block of data sent to the
 application server so that the file server can identify the source
 of the data in the event of an error?

 Does this additional exchange of information fit into the Iscsi
 protocol, or does it have to flow out of band somehow?

 -- 
 -Gary Mills--Unix Support--U of M Academic Computing and  
 Networking-
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] j4200 drive carriers

2009-02-01 Thread Andre van Eyssen
On Sun, 1 Feb 2009, Bob Friesenhahn wrote:


 I am worried that Sun is primarily interested in new business and
 tends to not offer replacement/new drives for as long as the actual
 service-life of the array.  What is the poor customer to do when Sun
 is no longer willing to offer a service contract and Sun is no longer
 willing to sell drives (or even the carriers) for the array?

You can still procure replacement drives for real vintage kit, like the 
A1000/D1000 arrays. I doubt your argument is valid. As a side point, by 
the time these arrays are dead & buried, the sleds for them will no doubt 
be as common as spuds (and don't we all have at least 30 of those lying 
around?)

 Sometimes it is only a matter of weeks before Sun stops offering
 supportive components.  For example, my Ultra 40 was only discontinued
 a month or so ago but already Sun somehow no longer lists memory for
 it (huh?).

If your trusty Sun partner couldn't supply you with memory for an 
Ultra-40, I'd take that as a sign to find a new partner. EOSL products 
vanish from websites but parts can still be ordered for them.

-- 
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] j4200 drive carriers

2009-02-01 Thread Richard Elling
Bryan Allen wrote:
 +--
 | On 2009-02-01 16:29:59, Richard Elling wrote:
 | 
 | The drives that Sun sells will come with the correct bracket.
 | Ergo, there is no reason to sell the bracket as a separate
 | item unless the customer wishes to place non-Sun disks in
 | them.  That represents a service liability for Sun, so they are
 | not inclined to do so.  It is really basic business.

 A specific example of why this policy drives me crazy:

 X4150. 8 disk bays. SAS HBA, SATA onboard.

 In its initial incarnation, the SAS HBA was optional (this was
 fixed soon after its release, but I got bitten by it, ordering
 it before support even knew it was released). I had also read
 that Sun would be selling 2.5 SATA disks for the system by the
 end of the year (2008).

 Regardless, mixing SAS and SATA in this system would would be
 incredibly useful for any number of applications.

 Sun does not sell 2.5 SATA disks for the X4150, and I cannot
 purchase sleds.

 Further: X4150s are presumably commonly deployed for database
 work (that's what both of mine do, along with a lot else).

 According to zilstat (thank you, Richard!), r/w SSDs would prove
 beneficial.

 Sun does not sell SSDs for the X4150, and I cannot purchase
 sleds.

 Obviously, I could buy 73GB SAS disks and simply repurpose their
 sleds. I hope everyone (who is a consumer) can agree that sucks.

 rant

 The company I work for is a small ESP; we've been around forever
 (1995), and until I moved us to Solaris two years ago, we were a
 pure Linux shop.

 We like Sun software. We like Sun hardware. We had a (small) part
 to play in getting DTrace into Perl 5.10. We're proponents of the
 company and its technology.

 As with everyone else who has asked for this sort of thing, I
 just wish Sun would offer me a bit more flexibility so I can more
 effectively do my job using their gear.
   

+1

The astute observer will note that the bracket for the X41xx family
works elsewhere. For example,
http://sunsolve.sun.com/handbook_pub/validateUser.do?target=Systems/SunFireX4150/components#Disks
XRA-SS2CF-73G10K contains 541-2123
http://sunsolve.sun.com/handbook_pub/validateUser.do?target=Systems/7410/components#Disks
XTA7410-LOGZ18GB contains 570-1182
XTA7410-READZ100G contains 541-2123

But that is just the mechanical parts. Unfortunately, there is also
firmware to be considered, though that tends to solve itself over time.

One would think that Sun could have leveraged the common chassis,
but they do not seem capable of doing so. Don't look for logic here,
there is none.
-- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] j4200 drive carriers

2009-02-01 Thread Bryan Allen
+--
| On 2009-02-01 20:55:46, Richard Elling wrote:
| 
| The astute observer will note that the bracket for the X41xx family
| works elsewhere. For example,
| 
http://sunsolve.sun.com/handbook_pub/validateUser.do?target=Systems/SunFireX4150/components#Disks
| XRA-SS2CF-73G10K contains 541-2123
| 
http://sunsolve.sun.com/handbook_pub/validateUser.do?target=Systems/7410/components#Disks
| XTA7410-LOGZ18GB contains 570-1182
| XTA7410-READZ100G contains 541-2123

I had noted that, when looking for the sleds alone, though not
those specific cases. Useful!

| But that is just the mechanical parts. Unfortunately, there is also
| firmware to be considered, though that tends to solve itself over time.

Hopefully!

| One would think that Sun could have leveraged the common chassis,
| but they do not seem capable of doing so. Don't look for logic here,
| there is none.

mm.
-- 
bda
Cyberpunk is dead.  Long live cyberpunk.
http://mirrorshades.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss