Re: [zfs-discuss] Help identify failed drive

2010-07-19 Thread Michael Shadle
On Mon, Jul 19, 2010 at 3:11 PM, Haudy Kazemi kaze0...@umn.edu wrote:

 ' iostat -Eni ' indeed outputs a Device ID on some of the drives, but I still
 can't understand how it helps me to identify the model of a specific drive.

Curious:

[r...@nas01 ~]# zpool status -x
  pool: tank
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: scrub completed after 14h2m with 0 errors on Sun Jul 18 18:32:38 2010
config:

        NAME        STATE     READ WRITE CKSUM
        tank        DEGRADED     0     0     0
          raidz2    ONLINE       0     0     0
            ...
          raidz2    DEGRADED     0     0     0
            ...
            c2t5d0  DEGRADED     0     0     0  too many errors
            ...


c2t5d0   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: ST31500341AS Revision: SD1B Device Id:
id1,s...@sata_st31500341as9vs077gt
Size: 1500.30GB 1500301910016 bytes
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0


Why has it been reported as bad (for probably 2 months now; I haven't
got around to figuring out which disk in the case it is, etc.) but
iostat isn't showing me any errors?

Note: I do a weekly scrub too. Not sure if that matters or helps reset
the device.
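
One way to tie c2t5d0 back to a physical drive, for what it's worth: the
tail of the Device Id that iostat prints appears to be the drive serial
number (9VS077GT in the output above), which can be matched against the
sticker on the drive itself. A single device can also be queried
directly:

# iostat -En c2t5d0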
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help identify failed drive

2010-07-19 Thread Michael Shadle
On Mon, Jul 19, 2010 at 4:16 PM, Marty Scholes martyscho...@yahoo.com wrote:

 Start a scrub or do an obscure find, e.g. 'find /tank_mountpoint -name core',
 and watch the drive activity lights.  The drive in the pool which isn't
 blinking like crazy is a faulted/offlined drive.

 Ugly and oh-so-hackerish, but it works.

that was my idea, except I hadn't figured out something to make just
specific drives write one at a time. although if it has been offlined or
whatever, then it shouldn't receive any requests, so that sounds even
easier. :)
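
A minimal way to exercise just one drive at a time is to read a chunk of
it directly and watch which activity light blinks; the device name below
is the suspect one from the status output, and the read is harmless:

# dd if=/dev/rdsk/c2t5d0s0 of=/dev/null bs=1024k count=1000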
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help identify failed drive

2010-07-19 Thread Michael Shadle
On Mon, Jul 19, 2010 at 4:16 PM, Marty Scholes martyscho...@yahoo.com wrote:

 Start a scrub or do an obscure find, e.g. 'find /tank_mountpoint -name core',
 and watch the drive activity lights.  The drive in the pool which isn't
 blinking like crazy is a faulted/offlined drive.

Actually I guess my real question is why iostat hasn't logged any
errors in its counters even though the device has been bad in there
for months?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help identify failed drive

2010-07-19 Thread Michael Shadle
On Mon, Jul 19, 2010 at 4:26 PM, Richard Elling rich...@nexenta.com wrote:

 Aren't you assuming the I/O error comes from the drive?
 fmdump -eV

okay - I guess I am. Is this just telling me "hey stupid, a checksum
failed"? In which case, why did this never resolve itself, and why did
the specific device get marked as degraded?

Apr 04 2010 21:52:38.920978339 ereport.fs.zfs.checksum
nvlist version: 0
        class = ereport.fs.zfs.checksum
        ena = 0x64350d4040300c01
        detector = (embedded nvlist)
        nvlist version: 0
                version = 0x0
                scheme = zfs
                pool = 0xfd80ebd352cc9271
                vdev = 0x29282dc6fa073a2
        (end detector)

        pool = tank
        pool_guid = 0xfd80ebd352cc9271
        pool_context = 0
        pool_failmode = wait
        vdev_guid = 0x29282dc6fa073a2
        vdev_type = disk
        vdev_path = /dev/dsk/c2t5d0s0
        vdev_devid = id1,s...@sata_st31500341as9vs077gt/a
        parent_guid = 0xc2d5959dd2c07bf7
        parent_type = raidz
        zio_err = 0
        zio_offset = 0x40abbf2600
        zio_size = 0x200
        zio_objset = 0x10
        zio_object = 0x1c06000
        zio_level = 2
        zio_blkid = 0x0
        __ttl = 0x1
        __tod = 0x4bb96c96 0x36e503a3
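
A quick way to see how many of these FMA has accumulated, and whether a
fault diagnosis is still active against the vdev (the grep pattern just
matches the class shown above):

# fmdump -e | grep -c ereport.fs.zfs.checksum
# fmadm faulty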
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help identify failed drive

2010-07-19 Thread Michael Shadle
On Mon, Jul 19, 2010 at 4:35 PM, Richard Elling rich...@nexenta.com wrote:

 It depends on whether the problem was fixed or not.  What says
        zpool status -xv

  -- richard

[r...@nas01 ~]# zpool status -xv
  pool: tank
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: scrub completed after 14h2m with 0 errors on Sun Jul 18 18:32:38 2010
config:

        NAME        STATE     READ WRITE CKSUM
        tank        DEGRADED     0     0     0
          raidz2    ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c0t6d0  ONLINE       0     0     0
            c0t7d0  ONLINE       0     0     0
            c0t0d0  ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0
          raidz2    DEGRADED     0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t2d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
            c2t4d0  ONLINE       0     0     0
            c2t5d0  DEGRADED     0     0     0  too many errors
            c2t6d0  ONLINE       0     0     0
            c2t7d0  ONLINE       0     0     0

It was never fixed. I thought I needed to replace the drive. Should I
mark it as resolved (or whatever the syntax is) and re-run a scrub?
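
If the drive stays, the sequence the status message is pointing at is
roughly: clear the error counters, rescrub, and watch whether the CKSUM
column climbs again. A sketch:

# zpool clear tank c2t5d0
# zpool scrub tank
# zpool status -v tank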
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Native ZFS for Linux

2010-06-11 Thread Michael Shadle
On Fri, Jun 11, 2010 at 2:50 AM, Alex Blewitt alex.blew...@gmail.com wrote:

 You are sadly mistaken.

 From GNU.org on license compatibilities:

 http://www.gnu.org/licenses/license-list.html

        Common Development and Distribution License (CDDL), version 1.0
        This is a free software license. It has a copyleft with a scope
 that's similar to the one in the Mozilla Public License, which makes it
 incompatible with the GNU GPL. This means a module covered by the GPL and a
 module covered by the CDDL cannot legally be linked together. We urge you
 not to use the CDDL for this reason.

        Also unfortunate in the CDDL is its use of the term “intellectual
 property”.

 Whether a license is classified as Open Source or not does not imply that
 all open source licenses are compatible with each other.

Can we stop the license talk *yet again*?

Nobody here is a lawyer (IANAL!), and everyone has their own
interpretations and is splitting hairs.

In my opinion, the source code itself shouldn't be ported; the
CONCEPTS should be. Then there are no licensing issues at all. No
questions, etc.

To me, ZFS is important for bitrot protection; pooled storage and
snapshots come in handy in a couple of places. Getting a COW filesystem
w/ snapshots and storage pooling would cover a lot of the demand for
ZFS as far as I'm concerned. (However, that's when a comparison with
Btrfs makes sense, as it is COW too.)

The minute I saw "ZFS on Linux" I knew this would degrade into a
virtual pissing contest of "my understanding is better than yours" and
a licensing fight.

To me, this is what needs to happen:

a) Get a Sun/Oracle attorney involved who understands this and flat
out explains what needs to be done to allow ZFS to be used with the
Linux kernel, or
b) Port the concepts and not the code (or the portions of code under
the restrictive license), or
c) Look at Btrfs or other filesystems which may be extended to give
the same capabilities as ZFS without the licensing issue and focus all
this development time on extending those.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Exporting iSCSI - it's still getting all the ZFS protection, right?

2010-05-03 Thread Michael Shadle
Quick sanity check here. I created a zvol and exported it via iSCSI to
a Windows machine so Windows could use it as a block device. Windows
formats it as NTFS, thinks it's a local disk, yadda yadda.

Is ZFS doing its magic checksumming and whatnot on this share, even
though it is seeing junk data (NTFS on top of iSCSI...), or am I not
getting any benefits from this setup at all (besides thin
provisioning, things like that)?

These were my steps:

Make sure service is enabled on the NAS:
# svcadm enable /system/iscsitgt

Create the zvol and share it (sparse volume size of 1000 gigs)
# zfs create -s -V 1000g tank/shares/foo
# zfs set shareiscsi=on tank/shares/foo
# zfs set compression=on tank/shares/foo

Install Microsoft iSCSI Software Initiator Version 2.08 on the Windows server
http://www.microsoft.com/downloads/details.aspx?familyid=12cb3c1a-15d6-4585-b385-befd1319f825&displaylang=en

Thanks in advance...
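
For what it's worth, the checksum and compression settings on the zvol
itself can be confirmed directly (ZFS checksums the zvol's blocks
regardless of what NTFS puts in them); a quick check against the volume
created above:

# zfs get checksum,compression,volsize tank/shares/foo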
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS for my home RAID? Or Linux Software RAID?

2010-03-07 Thread Michael Shadle
On Sun, Mar 7, 2010 at 6:09 PM, Slack-Moehrle
mailingli...@mailnewsrss.com wrote:

 OpenSolaris or FreeBSD with ZFS?

zfs for sure. it's nice having something bitrot-resistant.

it was designed with data integrity in mind.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Hardware for high-end ZFS NAS file server - 2010 March edition

2010-03-04 Thread Michael Shadle
If I had a decently ventilated closet or space to do it in I wouldn't
mind noise, but I don't, that's why I had to build my storage machines
the way I did.

On Thu, Mar 4, 2010 at 12:23 PM, Thomas Burgess wonsl...@gmail.com wrote:
 it's not quiet by default but it can be made somewhat quieter by swapping
 out the fans or going to larger fans.  It's still totally worth it.

 I use smaller, silent htpc's for the actual media and connect to the norco
 over gigabit.

 My norco box is connected to the network with 2 link aggregated gigabit
 ethernet cables.

 It's very nice.


 On Thu, Mar 4, 2010 at 3:03 PM, Michael Shadle mike...@gmail.com wrote:

 On Thu, Mar 4, 2010 at 4:12 AM, Thomas Burgess wonsl...@gmail.com wrote:

  I got a norco 4020 (the 4220 is good too)
 
  Both of those cost around 300-350 dollars.  That is a 4U case with 20 hot
  swap bays.

 Typically rackmounts are not designed for quiet. He said quietness is
 #2 in his priorities...

 Or does the Norco unit perform quietly or have the ability to be quieter?


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Media server build

2010-01-30 Thread Michael Shadle

This is my setup:
http://michaelshadle.com/2009/09/28/my-recipe-for-zfs-at-home/

It runs pretty quiet. I tried to swap the fans out on the 5-in-3 units  
but couldn't get it to work, although I didn't put much effort into  
it. I actually have two identical machines now. One runs SXCE. The  
other is Sol10u7 I think (it's Solaris 10 though) - I forgot to move  
the data off the SXCE one so I could convert it. I don't really need  
anything from opensolaris technically. Time slider would be the only  
thing of benefit to me. Possibly the in-kernel CIFS server now that  
the bug is supposedly fixed.


Anyway they work great and I feel safe knowing ZFS is working its
magic to protect my data (as much as any filesystem can).

Sent from my iPhone

On Jan 30, 2010, at 2:32 PM, Simon Breden sbre...@gmail.com wrote:

Good to hear someone else confirming the greatness of this ION  
platform for an HTPC. BTW, how do you keep all those drives quiet?  
Do you use a lot of silicone grommets on the drive screws, or some  
other form of vibration damping?


Cheers,
Simon
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] Media server build

2010-01-29 Thread Michael Shadle
On Fri, Jan 29, 2010 at 12:53 PM, Simon Breden sbre...@gmail.com wrote:
 I have used OpenSolaris on the NAS and XBMC as the media player, and it works 
 great.

Same here, although I use a normal modded XBOX. I am thinking of
switching to a Mac Mini w/ Plex soon (a friend's setup is really
awesome) - I want more horsepower under the hood. The XBOX is dated
now, and won't even play certain DVDs.

I will note that one version of OpenSolaris (supposedly fixed now)
could not load .iso files above something like 2.6 gigs using the
in-kernel CIFS service. Switching it back to userland samba worked
fine. My whole reason for using SXCE was for the CIFS service :p
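
For reference, a rough sketch of what enabling the in-kernel CIFS
service and sharing a dataset through it looks like (standard
OpenSolaris service and property names; the dataset name is just a
placeholder):

# svcadm enable -r smb/server
# zfs set sharesmb=on tank/media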
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Media server build

2010-01-29 Thread Michael Shadle
On Fri, Jan 29, 2010 at 1:31 PM, Simon Breden sbre...@gmail.com wrote:

 Yes, a modded XBOX will play a lot of things but will struggle with highly 
 compressed streams and will fail at HD etc. The ION platform is especially 
 interesting as these boxes are really cheap, and you can slap Linux + XBMC on 
 there for free. Yes, I also hear that Plex running on a Mac Mini is good, but 
 they are more expensive than an ION-based box. ION can play HD apparently. 
 And what about Plex? I think it's a fork of XBMC. Does it have the same level 
 of development support as XBMC?

Not sure this is the right place to discuss all that :)

It is a fork of XBMC. I am not sure where it improves upon it, but
I've seen it running in person, 1080p/mkv/looks gorgeous, downloads
info about the movies and all sorts of great stuff.

I've thought about an ION solution. Small, cheaper, etc. But XBMC on
top of Windows scares me a bit. I don't consider Windows a stable
foundation for anything. Linux-based XBMC ports too were annoying to
try to get going a while back. Think it's better now...

The normal XBOX can do 1080i at the max, and the menus/UI look great,
but it struggles with mkvs and other things. It's one of those "great
a few years ago" solutions.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Comments on home OpenSolaris/ZFS server

2009-09-30 Thread Michael Shadle
i looked at possibly doing one of those too - but only 5 disks was too
small for me. and i was too nervous about compatibility with mini-itx
stuff.

On Wed, Sep 30, 2009 at 6:22 PM, Jorgen Lundman lund...@gmo.jp wrote:

 I too went with a 5in3 case for HDDs, in a nice portable Mini-ITX case, with
 Intel Atom. More of a SOHO NAS for home use, rather than a beast. Still, I
 can get about 10TB in it.

 http://lundman.net/wiki/index.php/ZFS_RAID

 I can also recommend the embeddedSolaris project for making a small bootable
 Solaris. Very flexible and can put on the Admin GUIs, and so on.

 https://sourceforge.net/projects/embeddedsolaris/

 Lund

 --
 Jorgen Lundman       | lund...@lundman.net
 Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
 Shibuya-ku, Tokyo    | +81 (0)90-5578-8500          (cell)
 Japan                | +81 (0)3 -3375-1767          (home)
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Comments on home OpenSolaris/ZFS server

2009-09-28 Thread Michael Shadle
This seems like you're doing an awful lot of planning for only 8 SATA
+ 4 SAS bays?

I agree - SOHO usage of ZFS is still a scary "will this work?" deal. I
found a working setup and I cloned it. It gives me 16x SATA + 2x SATA
for mirrored boot, 4GB ECC RAM and a quad core processor - total cost
without disks was ~ $1k I believe. Not too shabby. Emphasis was also
for acoustics - rack dense would be great but my current living
situation doesn't warrant that. The noisiest components are the 5-in-3
chassis used in the front of the case. I have to keep the fans on high
(I tried to swap out for larger, quieter fans, but could not get the
fan alarm to shut up) or they go over Seagate's recommended maximum of
50 degrees.

I really should post my parts list up on my blog. I had to choose
everything to the best of my research online and hope for the best.


On Mon, Sep 28, 2009 at 1:12 PM, Ware Adams rwali...@washdcmail.com wrote:
 Hello,

 I have been researching building a home storage server based on OpenSolaris
 and ZFS, and I would appreciate any time people could take to comment on my
 current leanings.

 I've tried to gather old information from this list as well as the HCL, but
 I would welcome anyone's experience on both compatibility and
 appropriateness for my goals.  I'd love if that white box server wiki page
 were set up now, but for now I'll have to just ask here.

 My priorities:

 1)  Data security.  I'm hoping I can get this via ECC RAM and enterprise
 drives that hopefully don't lie to ZFS about flushing to disk?  I'll run
 mirrored pools for redundancy (which leads me to want a case w/a lot of
 bays).
 2)  Compatibility.  For me this translates into low upkeep cost (time).  I'm
 not looking to be the first person to get OpenSolaris running on some
 particular piece of hardware.
 3)  Scaleable.  I'd like to not have to upgrade every year.  I can always
 use something like an external JBOD array, but there's some appeal to having
 enough space in the case for reasonable growth.  I'd also like to have
 enough performance to keep up with scaling data volume and ZFS features.
 4)  Ability to run some other (lightweight) services on the box.  I'll be
 using NFS (iTunes libraries for OS X clients) and iSCSI (Time Machine
 backups) primarily, but my current home server also runs a few small
 services (MySQL etc...) that are very lightweight but nevertheless might be
 difficult to do on a ZFS (or ZFS like) appliance
 5)  Cost.  All things being equal cheaper is better, but I'm willing to pay
 more to accomplish particularly 1-3 above.

 My current thinking:

 SuperMicro 7046A-3 Workstation
 http://supermicro.com/products/system/4U/7046/SYS-7046A-3.cfm
 8 hot swappable drive bays (SAS or SATA, I'd use SATA)
 Network/Main board/SAS/SATA controllers seem well supported by OpenSolaris
 Will take IPMI card for remote admin (with video and iso redirection)
 12 RAM slots so I can buy less dense chips
 2x 5.25" drive bays.  I'd use a SuperMicro Mobile Rack M14T
 (http://www.supermicro.com/products/accessories/mobilerack/CSE-M14.cfm) to
 get 4 2.5" SAS drives in one of these.  2 would be used for a mirrored boot
 pool leaving 2 for potential future use (like a ZIL on SSD).

 Nehalem E5520 CPU
 These are clearly more than enough now, but I'm hoping to have decent CPU
 performance for say 5 years (and I'm willing to pay for it up front vs.
 upgrading every 2 years...I don't want this to be too time consuming of a
 hobby).  I'd like to have processor capacity for compression and (hopefully
 reasonably soon) de-duplication as well as obviously support ECC RAM.

 Crucial RAM in 4 GB density (price scales linearly up through this point and
 I've had good support from Crucial)

 Seagate Barracuda ES.2 1TB SATA (Model ST31000340NS) for storage pool.  I
 would like to use a larger drive, but I can't find anything rated to run
 24x7 larger than 1TB from Seagate.  I'd like to have drives rated for 24x7
 use, and I've had good experience w/Seagate.  Again, a larger case gives me
 some flexibility here.

 Misc (mainly interested in compatibility b/c it will hardly be used):
 Sun XVR-100 video card from eBay
 Syba SY-PCI45004
 (http://www.newegg.com/Product/Product.aspx?Item=N82E16816124025) IDE card
 for CD-ROM
 Sony DDU1678A
 (http://www.newegg.com/Product/Product.aspx?Item=N82E16827131061) CD-ROM

 Thanks a lot for any thoughts you might have.

 --Ware
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Comments on home OpenSolaris/ZFS server

2009-09-28 Thread Michael Shadle
Yeah - give me a bit to rope together the parts list and double check
it, and I will post it on my blog.


On Mon, Sep 28, 2009 at 2:34 PM, Ware Adams rwali...@washdcmail.com wrote:
 On Sep 28, 2009, at 4:20 PM, Michael Shadle wrote:

 I agree - SOHO usage of ZFS is still a scary "will this work?" deal. I
 found a working setup and I cloned it. It gives me 16x SATA + 2x SATA
 for mirrored boot, 4GB ECC RAM and a quad core processor - total cost
 without disks was ~ $1k I believe. Not too shabby. Emphasis was also
 for acoustics - rack dense would be great but my current living
 situation doesn't warrant that

 This sounds interesting.  Do you have any info on it (case you started with,
 etc...).

 I'm concerned about noise too as this will be in a closet close to the room
 where our television is.  Currently there is a MacPro in there which isn't
 terribly quiet, but the SuperMicro case is reported to be fairly quiet.

 Thanks,
 Ware

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Comments on home OpenSolaris/ZFS server

2009-09-28 Thread Michael Shadle
rackmount chassis aren't usually designed with acoustics in mind :)

however i might be getting my closet fitted so i can put half a rack
in. might switch up my configuration to rack stuff soon.

On Mon, Sep 28, 2009 at 3:04 PM, Thomas Burgess wonsl...@gmail.com wrote:
 personally i like this case:


 http://www.newegg.com/Product/Product.aspx?Item=N82E16811219021

 it's got 20 hot swap bays, and it's surprisingly well built.  For the money,
 it's an amazing deal.



 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Comments on home OpenSolaris/ZFS server

2009-09-28 Thread Michael Shadle
well when i start looking into rack configurations i will consider it. :)

here's my configuration - enjoy!
http://michaelshadle.com/2009/09/28/my-recipe-for-zfs-at-home/

On Mon, Sep 28, 2009 at 3:10 PM, Thomas Burgess wonsl...@gmail.com wrote:
  i own this case, it's really not that bad.  It's got 4 fans but they are
 really big and don't make nearly as much noise as you'd think.  honestly,
 it's not bad at all.  I know someone who sits it vertically as well,
 honestly, it's a good case for the money


 On Mon, Sep 28, 2009 at 6:06 PM, Michael Shadle mike...@gmail.com wrote:

 rackmount chassis aren't usually designed with acoustics in mind :)

 however i might be getting my closet fitted so i can put half a rack
 in. might switch up my configuration to rack stuff soon.

 On Mon, Sep 28, 2009 at 3:04 PM, Thomas Burgess wonsl...@gmail.com
 wrote:
  personally i like this case:
 
 
  http://www.newegg.com/Product/Product.aspx?Item=N82E16811219021
 
  it's got 20 hot swap bays, and it's surprisingly well built.  For the
  money,
  it's an amazing deal.
 
 
 
  ___
  zfs-discuss mailing list
  zfs-discuss@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Petabytes on a budget - blog

2009-09-02 Thread Michael Shadle
Yeah, I wrote them about it. I said they should sell them and, even
better, pair them with their offsite backup service, kind of like a
massive appliance-and-service option.


They're not selling them, but they did encourage me to just make a copy
of it. It looks like the only questionable piece in it is the port
multipliers - Sil3726 if I recall - which I think is just barely
becoming supported in the most recent snv builds? That's been something
I've been wanting forever anyway.


You could also just design your own case that is optimized for a bunch
of disks, plus a mobo, as long as it has ECC support and enough
pci/pci-x/pcie slots for the number of cards you want to add. You might
be able to build one without port multipliers and just use a bunch of
8, 12, or 16 port sata controllers.


I want to design a case that has two layers - an internal layer with  
all the drives and guts and an external layer that pushes air around  
it to exhaust it quietly and has additional noise dampening...


Sent from my iPhone

On Sep 2, 2009, at 11:01 AM, Al Hopper a...@logical-approach.com wrote:


Interesting blog:

http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/

Regards,

--
Al Hopper  Logical Approach Inc,Plano,TX a...@logical-approach.com
  Voice: 972.379.2133 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Petabytes on a budget - blog

2009-09-02 Thread Michael Shadle
IMHO it depends on the usage model. Mine is for home storage. A couple
of HD streams at most. 40MB/sec over a gigabit network switch is fine
by me.

On Wed, Sep 2, 2009 at 11:54 AM, Jacob Ritortojacob.rito...@gmail.com wrote:
 Torrey McMahon wrote:

 3) Performance isn't going to be that great with their design but...they
 might not need it.


 Would you be able to qualify this assertion?  Thinking through it a bit,
 even if the disks are better than average and can achieve 1000Mb/s each,
 each uplink from the multiplier to the controller will still have 1000Gb/s
 to spare in the slowest SATA mode out there.  With (5) disks per multiplier
 * (2) multipliers * 1000GB/s each, that's 1Gb/s at the PCI-e interface,
 which approximately coincides with a meager 4x PCI-e slot.
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Petabytes on a budget - blog

2009-09-02 Thread Michael Shadle
Probably due to the lack of port multiplier support. Or perhaps they
run monitoring software that only works on Linux.

Sent from my iPhone

On Sep 2, 2009, at 4:33 PM, Trevor Pretty trevor_pre...@eagle.co.nz  
wrote:






Overall, the product is what it is.  There is nothing wrong with it
in the right situation, although they have trimmed some corners that I
wouldn't have trimmed in their place.  However, comparing it to a
NetApp or an EMC is to grossly misrepresent the market.

I don't think that is what they were doing. I think they were
trying to point out they had $X budget and wanted to buy Y PB of
storage, and building their own was cheaper than buying it. No
surprise there! However they don't show their R&D costs. I'm sure
the designers don't work for nothing, although to their credit they
do share the H/W design and have made it open source. They also
mention www.protocase.com will make them for you, so if you want to
build your own then you have no R&D costs.


I would love to know why they did not use ZFS.


This is the equivalent of seeing how many USB drives you can plug in
as a storage solution.  I've seen this done.


Julian
--
Julian King
Computer Officer, University of Cambridge, Unix Support
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



--
Trevor Pretty | +64 9 639 0652 | +64 21 666 161
Eagle Technology Group Ltd.
Gate D, Alexandra Park, Greenlane West, Epsom
Private Bag 93211, Parnell, Auckland





www.eagle.co.nz
This email is confidential and may be legally privileged. If  
received in error please destroy and immediately notify us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Raidz vdev size... again.

2009-04-27 Thread Michael Shadle
On Mon, Apr 27, 2009 at 4:51 PM, Scott Lawson
scott.law...@manukau.ac.nz wrote:

 If possible though you would be best to let the 3ware controller expose
 the 16 disks as a JBOD  to ZFS and create a RAIDZ2 within Solaris as you
 will then
 gain the full benefits of ZFS. Block self healing etc etc.

 There isn't an issue in using a larger number of disks in a RAIDZ2, just
 that it is not the optimal size. Longer rebuild times for larger vdevs in
 a zpool (although this is proportional to how full the pool is). Two
 parity disks give you greater cover in the event of a drive failing in a
 large vdev stripe.

Hmm, this is a bit disappointing to me. I would then have dedicated only
2 disks out of 16 to parity in a single large raidz2, instead of two
8-disk raidz2's (meaning 4 disks went to parity).

I was still operating under the impression that vdevs larger than 7-8
disks typically make baby Jesus nervous.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can this be done?

2009-04-08 Thread Michael Shadle
On Wed, Apr 8, 2009 at 9:39 AM, Miles Nordin car...@ivy.net wrote:
 ms == Michael Shadle mike...@gmail.com writes:

    ms When I attach this new raidz2, will ZFS auto rebalance data
    ms between the two, or will it keep the other one empty and do
    ms some sort of load balancing between the two for future writes
    ms only?

 the second choice.

I actually have to move a bunch of files around anyway, so what I am
planning on doing is waiting until tonight (hopefully) when I add my
second raidz2 vdev and then doing the move. It's between two ZFS
filesystems on the same zpool; hopefully that might help force a
rebalance of the data a bit (an idea someone had) - either way it's
something I have to do, so I might be able to get some possible
additional benefit out of it too :)

also Cindy: thanks for the add vs. attach correction.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can this be done?

2009-04-07 Thread Michael Shadle
On Wed, Apr 1, 2009 at 3:19 AM, Michael Shadle mike...@gmail.com wrote:
 I'm going to try to move one of my disks off my rpool tomorrow (since
 it's a mirror) to a different controller.

 According to what I've heard before, ZFS should automagically
 recognize this new location and have no problem, right?

I have successfully seen how nice ZFS is at locating a disk properly
across different controllers/ports - except for rpool (ZFS boot).
Moving those creates a huge PITA.


Now quick question - if I have a raidz2 named 'tank' already I can
expand the pool by doing:

zpool attach tank raidz2 device1 device2 device3 ... device7

It will make 'tank' larger and each group of disks (vdev? or zdev?)
will be dual parity. It won't create a mirror, will it?
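
(Per the add-vs-attach correction mentioned above, the pool-growing form
uses 'zpool add'; a sketch with placeholder device names:)

# zpool add tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0
# zpool status tank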
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can this be done?

2009-04-07 Thread Michael Shadle
On Tue, Apr 7, 2009 at 5:22 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:

 No.  The two vdevs will be load shared rather than creating a mirror. This
 should double your multi-user performance.

Cool - now a followup -

When I attach this new raidz2, will ZFS auto rebalance data between
the two, or will it keep the other one empty and do some sort of load
balancing between the two for future writes only?

Is there a way (perhaps a scrub? or something?) to get the data spread
around to both?
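
One way to watch how writes actually get spread across the two vdevs
afterwards is the per-vdev iostat view, e.g.:

# zpool iostat -v tank 5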
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can this be done?

2009-04-01 Thread Michael Shadle
I'm going to try to move one of my disks off my rpool tomorrow (since
it's a mirror) to a different controller.

According to what I've heard before, ZFS should automagically
recognize this new location and have no problem, right?

Or do I need to do some sort of detach/etc. process first?
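
For a data pool, the conservative re-cabling sequence is an
export/import cycle (a sketch; rpool itself can't be exported while the
system is booted from it, which is part of why the boot pool is the
awkward case):

# zpool export tank
  ...move the disk to the new controller/port...
# zpool import tank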
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can this be done?

2009-03-31 Thread Michael Shadle
On Tue, Mar 31, 2009 at 1:31 AM, Scott Lawson
scott.law...@manukau.ac.nz wrote:

 No. There is no way to expand a RAIDZ or RAIDZ2 at this point. It is a
 feature that is often discussed and that people would like, but it has
 been seen by Sun as more of a feature home users would like rather than
 enterprise users. Enterprise users are expected to buy 4 or more disks
 and create another RAIDZ2 vdev and add it to the pool to increase space.
 You would of course have this option.

Yeah, I get it. It definitely would seem to be more for the lower-cost
market, since enterprises have $$ :)

 However, by the time that you fill it there might be a solution. Adam
 Leventhal proposed a way that this could be implemented on his blog, so
 I suspect at some point in the next few years somebody will implement it
 and you will possibly have the option to do so then (after an OS and ZFS
 version upgrade).

 http://blogs.sun.com/ahl/entry/expand_o_matic_raid_z

Well - years / etc. is not my timeline... in a few years I could buy a
normal-size chassis and put 4TB disks in there and not care about my
physical limitations :)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can this be done?

2009-03-30 Thread Michael Shadle
My only question is how long it takes to resilver... Supposedly the
entire array has to be checked, which means 6x1.5TB. It has a quad core
CPU that's basically dedicated to it. Anyone have any estimates?


Sounds like it is a lot slower than a normal raid5-style rebuild. Is
there a way to tune it so it can rebuild/resilver faster?


On Mar 29, 2009, at 9:43 PM, Monish Shah mon...@indranetworks.com  
wrote:



Hello David and Michael,

Well I might back up the more important stuff offsite. But in  
theory it's all replaceable. Just would be a pain.


And what is the cost of the time to replace it versus the price of  
a  hard disk? Time ~ money.


This is true, but there is one counterpoint.  If you do raidz2, you  
are definitely paying for extra disk(s).  If you stay with raidz1,  
the cost of the time to recover the data would be incurred if and  
only if he has a failure in raidz1 followed by a second failure  
during the re-build process. So, the statistically correct thing to  
do is to multiply the cost of recovery by the probability and see if  
that exceeds the cost of the new drives.


To be really accurate, the cost of raidz2 option should also include  
the cost of moving the data from the existing raidz1 to the new  
raidz2 and then re-formatting the raidz1 into raidz2.


However, all this calculating is probably not worthwhile.  My  
feeling is: it's just a home video server and Michael still has the  
original media (I think).  Raidz1 is good enough.


Monish


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can this be done?

2009-03-30 Thread Michael Shadle
On Mon, Mar 30, 2009 at 4:00 PM, David Magda dma...@ee.ryerson.ca wrote:

 There is a background process in ZFS (see scrub in zpool(1M)) that goes
 through and make sure all the checksums match reality (and corrects things
 if it can). It's reading all the data, but unlike hardware RAID arrays, it
 only checks the actual space used.

 It basically goes through the file system structure hierarchy, and if
 there's an unused space on the array it doesn't bother--since no data blocks
 point to it, there's nothing to check. The scrubbing process is the same
 whether you're using mirrors, RAID-Z1, or RAID-Z2. It can be kicked off
 manually or you can launch it via cron / SMF.

 Not sure about tuning (e.g., allocating bandwidth / priority). If you start
 a scrub the output of zpool status will give the progress (%) and ETA to
 finish.

 There is (was?) a bug where creating a new snapshot reset the scrub.

Well basically I am trying to analyze giving up 1/7th of my space for
the off chance that one drive fails during resilvering. I just don't
know what kind of time to expect for a resilver. I'm sure it also
depends on the build of nevada too and various bugs...

Normally it seems like raid5 is perfectly fine for a workload like this,
but maybe I'd sleep better at night knowing I could have 2 disks fail,
but the odds of that are pretty slim. I've never had 2 disks fail, and
if I did, the whole array is probably failed / the actual unit itself
got damaged, and then probably more than the 2 disks have been
destroyed anyway.

Looks like there are two open requests for speeding up and slowing
down the resilvering process already. So it does not sound like you
can tune it. But it would be nice to have some sort of number to
expect.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can this be done?

2009-03-30 Thread Michael Shadle
On Mon, Mar 30, 2009 at 4:13 PM, Michael Shadle mike...@gmail.com wrote:

 Well basically I am trying to analyze giving up 1/7th of my space for
 the off chance that one drive fails during resilvering. I just don't
 know what kind of time to expect for a resilver. I'm sure it also
 depends on the build of nevada too and various bugs...

 Normally it seems like raid5 is perfectly fine for a workoad like this
 but maybe I'd sleep better at night knowing I could have 2 disks fail,
 but the odds of that are pretty slim. I've never had 2 disks fail, and
 if I did, the whole array is probably failed / the actual unit itself
 got damaged, and then probably more than the 2 disks have been
 destroyed anyway.

 Looks like there are two open requests for speeding up and slowing
 down the resilvering process already. So it does not sound like you
 can tune it. But it would be nice to have some sort of number to
 expect.

Well after all this discussion I think I've come to the conclusion:

I think I will just create 2 zpools. One called "duo" or "dual" or
something, and one called "single" or some other creative/Latin/etc.
word for it.

The stuff that is easy to replace just by re-ripping it to disk (other
than the time and effort to do so) I will keep on that raidz1 one.

The new disks I'll make into a raidz2, and keep the more
important/harder to find stuff/backups on the raidz2.

I just don't know if I want to go with setting up raidz2, moving
everything off the existing one (9tb or so) to the new raidz2 +
another temporary area and re-do the existing raidz1 into raidz2. I'm
not sure it's -that- important. If my chassis supported more disks, I
wouldn't be as frugal, but I have limited space and I would like to
squeeze more space out of it if I can.

Sounds like a reasonable idea, no?

Follow up question: can I add a single disk to the existing raidz2
later on (if somehow I found more space in my chassis) so instead of a
7 disk raidz2  (5+2) it becomes a 6+2 ?

Thanks...
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can this be done?

2009-03-29 Thread Michael Shadle
Okay so riddle me this - can I create a raidz2 using the new disks and
move all the data from the existing vdev to it, then recreate a raidz2,
this time using the old 7 disks?


And have them all stay in the same Zpool?

Side note: does the port I plug the drive into matter on the  
controller? Does it have to be the same drive lineup or does it work  
based on drive uuid or something like that?


On Mar 29, 2009, at 8:58 AM, David Magda dma...@ee.ryerson.ca wrote:


On Mar 29, 2009, at 00:41, Michael Shadle wrote:

Well I might back up the more important stuff offsite. But in  
theory it's all replaceable. Just would be a pain.


And what is the cost of the time to replace it versus the price of a  
hard disk? Time ~ money.


There used to be a time when I liked fiddling with computer parts. I
now have other, more productive ways of wasting my time. :)



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can this be done?

2009-03-29 Thread Michael Shadle
On Sun, Mar 29, 2009 at 10:35 AM, David Magda dma...@ee.ryerson.ca wrote:

 Create new pool, move data to it (zfs send/recv), destroy old RAID-Z1 pool.

Would send/recv be more efficient than just a massive rsync or related?

Also I'd have to reduce the data on my existing raidz1 as it is almost
full, and the raidz2 it would be sending to would be 1.5tb smaller
technically.
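
For reference, a minimal send/receive sketch for that kind of migration
(the dataset and pool names below are made up):

# zfs snapshot -r tank/data@migrate
# zfs send -R tank/data@migrate | zfs receive -F tank2/data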
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can this be done?

2009-03-29 Thread Michael Shadle
On Sun, Mar 29, 2009 at 1:59 PM, Brent Jones br...@servuhome.net wrote:

 I'd personally say send/recv would be more efficient, rsync is awfully
 slow on large data sets. But, it depends what build you are using!
 BugID 6418042 (slow zfs send/recv) was fixed in build 105, it impacted
 send/recv operations local to remote, not sure if it happens local to
 local, but I experienced it doing local-remote send/recv.

 Not sure the best way to handle moving data around, when space is
 tight though...

Well one thing is - I've never used send/recv before first off, and
I'm comfortable with rsync - and rsync 3.x is a hell of a lot more
efficient too with large amounts of files. Although most of these are
large files, not large file counts.

I'd probably try to upgrade this to snv_110 at the same time and
update the zpool format too while I'm at it. Hopefully it would
resolve any possible oddities... and not introduce new ones. Like how
I can't install snv_110 on my other machine properly, it just gives me
a grub prompt on reboot, it doesn't seem to install zfs root properly
or something.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Can this be done?

2009-03-28 Thread Michael Shadle
I currently have a 7x1.5tb raidz1.

I want to add phase 2 which is another 7x1.5tb raidz1

Can I add the second phase to the first phase and basically have two
raid5's striped (in raid terms)?

Yes, I probably should upgrade the zpool format too. Currently running
snv_104. Also should upgrade to 110.

If that is possible, would anyone happen to have the simple command
lines to do it quick? I assume I'd be creating another raidz1 and then
somehow growing the tank zpool?

Does this make sense, or is this stupid from a performance
perspective? Should I just have two separate zpools? Ideally I would
like to have one massive data storage target. I'd be fine with somehow
changing this into a raidz2 as well, I suppose, since I had planned on
it being another raidz1 anyway.

Or, perhaps I could add tank #2 as a raidz2, and then move all the
data off tank #1, and then add disks individually from tank #1 until I
have all 14 disks in a single raidz2? Performance is not an absolute
must - I can deal with a little bit of overhead.

Thanks in advance.


[r...@nas01 ~]# zpool status
  pool: rpool
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
pool will no longer be accessible on older software versions.
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t0d0s0  ONLINE       0     0     0
            c0t1d0s0  ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
pool will no longer be accessible on older software versions.
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0
            c0t6d0  ONLINE       0     0     0
            c0t7d0  ONLINE       0     0     0

errors: No known data errors
[r...@nas01 ~]# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool   149G  11.4G   138G     7%  ONLINE  -
tank   9.50T  9.34T   159G    98%  ONLINE  -
[r...@nas01 ~]#
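
For the "simple command lines" part of the question: adding a second
raidz1 vdev to grow the existing pool would look roughly like this (the
device names are placeholders, not the actual new disks):

# zpool add tank raidz1 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0
# zpool list tank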
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can this be done?

2009-03-28 Thread Michael Shadle
On Sat, Mar 28, 2009 at 4:30 AM, Peter Tribble peter.trib...@gmail.com wrote:

 so I can add individual disks to the existing tank zpool anytime i want?

 Yes, but you wouldn't want to do that. (And zpool might not like it.)

 If you just add a disk, it just gets added as a new device. So you have
 unprotected storage.

so you're saying i should add 7 disks to match the existing setup (or
at least 2 disks so it has some sort of redundancy)

and i would run 'zpool add tank raidz1 disk1 disk2 disk3 disk4 disk5
disk6 disk7' ... if my goal is to use 7 disks. this would allow it to
become part of one large storage pool with two identical types of
redundancy setups (separate from each other, like two physically
different raidsets combined, though, which is fine)

 In particular, you can't grow the existing raidz. What you're doing
 here is adding
 a second raidz1 vdev. That's good because the 2nd phase of your storage is
 just like the first phase.

i guess this is redundant, but would i be able to see these as one
large storage pool, or would i essentially have tank and tank2?

is there a way to combine them? just the command above?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can this be done?

2009-03-28 Thread Michael Shadle
2009/3/28 Tim t...@tcsac.net:

 There is no harm from using a raidz2 vdev even if an existing vdev is only
 raidz1.  If raidz2 is an available option then it is wise to choose it.  Of
 course starting out with raidz2 would have been even better.

 #2: raidz2 isn't always wise to choose.  It's a matter of performance,
 space, security requirements.  7+1 is fine for raidz1.  If he was pushing 10
 data disks that'd be another story.

if i went raidz2 i'd want the entire 14 disk array in it i think.

i'd rather not do a raidz2 with less than 100% of the disks and then a
second raidz1 (or 2) because i'd wind up losing much more disk space.
essentially, i am willing to give up 2 of 14 disks (roughly of course)
to parity.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can this be done?

2009-03-28 Thread Michael Shadle
Well this is for a home storage array for my dvds and such. If I have
to turn it off to swap a failed disk it's fine. It does not need to be
highly available and I do not need extreme performance like a database,
for example. 45MB/sec would even be acceptable.


On Mar 28, 2009, at 10:47 AM, Bob Friesenhahn bfrie...@simple.dallas.tx.us 
 wrote:



On Sat, 28 Mar 2009, Michael Shadle wrote:

if i went raidz2 i'd want the entire 14 disk array in it i think.

i'd rather not do a raidz2 with less than 100% of the disks and then a
second raidz1 (or 2) because i'd wind up losing much more disk space.
essentially, i am willing to give up 2 of 14 disks (roughly of course)
to parity.


Hopefully you consider all of the costs before making this sort of  
decision.  If you are a lousy tipper you can't expect very good  
service the next time you come to visit. :-)


If 14 disks cost a lot, then you should carefully balance the cost  
of the wasted disk against the cost of lost performance or the  
cost of lost availability.  In many business environments, the  
potential for lost availability more than justifies purchasing more  
wasted disk.  In many business environments, the potential for  
lousy performance more than justifies purchasing more wasted  
disk.  Any good businessman should be able to specify a dollars per  
hour cost to the business if the storage is not available, or  
unable to provide sufficient performance to meet business needs.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can this be done?

2009-03-28 Thread Michael Shadle



On Mar 28, 2009, at 5:22 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us 
 wrote:



On Sat, 28 Mar 2009, Michael Shadle wrote:

Well this is for a home storage array for my dvds and such. If I  
have to turn it off to swap a failed disk it's fine. It does not  
need to be highly available and I do not need extreme performance  
like a database for example. 45mb/sec would even be acceptable.


I can see that 14 disks costs a lot for a home storage array but to  
you the data on your home storage array may be just as important as  
data on some businesses enterprise storage array.  In fact, it may  
be even more critical since it seems unlikely that you will have an  
effective backup system in place like large businesses do.




Well I might back up the more important stuff offsite. But in theory  
it's all replaceable. Just would be a pain.


Could I set up a raidz2 on the new vdev, then destroy the old one and
make that a raidz2 too, technically, if I want? Then both sets would
have double redundancy, if I was feeling paranoid. But raid5 has served
people well for a long time... Is resilvering speed roughly the same as
a raid5 controller rebuild?


The main problem with raidz1 is that if a disk fails and you replace
it, and a second disk substantially fails during resilvering (which
needs to successfully read all data on the remaining disks), then your
ZFS pool (or at least part of the files) may be toast.  The more data
which must be read during resilvering, the higher the probability that
there will be a failure.  If 12TB of data needs to be read to resilver
a 1TB disk, then that


This is good info to know. I guess I'm willing to take the risk of a
resilver. It's got a dedicated quad core proc doing nothing other than
exporting samba and zfs... I wonder how long it would take.





In order to lessen risk, you can schedule a periodic zfs scrub via a
cron job so that there is less probability of encountering data which
cannot be read.  This will not save you from entirely failed disk
drives though.




I do a weekly scrub and an 'fmadm faulty' every 5 or 10 minutes to email
me if anything comes up...
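
A minimal sketch of that kind of scheduling, as root crontab entries
(Solaris cron syntax; the pool name and interval are placeholders):

# weekly scrub, Sunday 03:00
0 3 * * 0  /usr/sbin/zpool scrub tank
# every 15 minutes, produce output (which cron mails) only if FMA reports a fault
0,15,30,45 * * * *  out=`/usr/sbin/fmadm faulty`; [ -n "$out" ] && echo "$out"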

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss