Re: [zfs-discuss] Failure of Quicktime *.mov files after move to zfs disk

2009-08-21 Thread Scott Laird
Checksum all of the files using something like md5sum and see if
they're actually identical.  Then test each step of the copy and see
which one is corrupting your files.
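
For example, something like this (paths hypothetical; Solaris ships
digest(1) where Linux and cygwin have md5sum):

# On the laptop, before copying:
md5sum Welcome.mov > Welcome.mov.md5

# On the zfs server, after copying:
digest -a md5 Welcome.mov

# On the linux server, verify directly against the saved sum:
md5sum -c Welcome.mov.md5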

On Fri, Aug 21, 2009 at 1:43 PM, Harry Putnam rea...@newsguy.com wrote:
 During the course of backup I had occasion to copy a number of
 quicktime video (*.mov) files to the zfs server disk.

 Once there... navigating to them with quicktime player and opening
 them results in a failure that (from the Windows Vista laptop) says:
    error -43: A file could not be found (Welcome.mov)

 I would have attributed it to some problem with scp'ing it to the zfs
 server, had it not been for finding that if I scp it to a linux server
 the problem does not occur.

 Both the zfs and linux (Gentoo) servers are on a home LAN, using
 the same router/switch[es] over gigabit network adaptors.

 On both occasions the files were copied using cygwin/ssh on a Vista
 laptop.

 Anyone have an idea what might cause this?

 Any more details I can add that would make diagnostics easier?


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS pegging the system

2009-07-17 Thread Scott Laird
Have each node record results locally, and then merge pair-wise until
a single node is left with the final results?  If you can do merges
that way while reducing the size of the result set, then that's
probably going to be the most scalable way to generate overall
results.
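
If the per-node result files come out sorted, the merging itself is
cheap; a minimal sketch using sort's merge mode (file names
hypothetical):

# round 1: merge pairs of already-sorted per-node files
sort -m node-001.out node-002.out > merge-a
sort -m node-003.out node-004.out > merge-b
# repeat for log2(N) rounds until one file remains
sort -m merge-a merge-b > final.out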

On Thu, Jul 16, 2009 at 10:51 AM, Jeff Haferman j...@haferman.com wrote:

 We have a SGE array task that we wish to run with elements 1-7.
 Each task generates output and takes roughly 20 seconds to 4 minutes
 of CPU time.  We're doing them on a cluster of about 144 8-core nodes,
 and we've divvied the job up to do about 500 at a time.

 So, we have 500 jobs at a time writing to the same ZFS partition.

 What is the best way to collect the results of the task? Currently we
 are having each task write to STDOUT and then are combining the
 results. This nails our ZFS partition to the wall and kills
 performance for other users of the system.  We tried setting up a
 MySQL server to receive the results, but it couldn't take 1000
 simultaneous inbound connections.

 Jeff


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs on 32 bit?

2009-06-26 Thread Scott Laird
It's actually worse than that--it's not just recent CPUs without VT
support.  Very few of Intel's current low-price processors, including
the Q8xxx quad-core desktop chips, have VT support.

On Wed, Jun 24, 2009 at 12:09 PM, roland no-re...@opensolaris.org wrote:
Dennis is correct in that there are significant areas where 32-bit
systems will remain the norm for some time to come.

 think of the hundreds of thousands of VMware ESX/Workstation/Player/Server 
 installations on non-VT-capable CPUs - even if the CPU has 64-bit capability, 
 a VM cannot run in 64-bit mode if the CPU is missing VT support. And VT hasn't 
 been available for all that long, and there are still even recent CPUs which 
 don't have VT support
 --
 This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sun's flash offering(s)

2009-04-19 Thread Scott Laird
On Sun, Apr 19, 2009 at 10:20 AM, David Magda dma...@ee.ryerson.ca wrote:
 Looking at the web site for Sun's SSD storage products, it looks like what's
 been offered is the so-called Logzilla:

        http://www.sun.com/storage/flash/specs.jsp

You know, those specs look almost *identical* to the Intel X25-E.  Is
this actually the STEC device, or just a rebranded Intel SSD?  Not
that there's anything wrong with the Intel or anything, but if you
were going to buy one, it'd probably be dramatically cheaper to buy it
from someone other than Sun, if Sun's service contract, etc., wasn't
important to you.

Compare the URL above with this one:

  http://www.intel.com/design/flash/nand/extreme/index.htm


Scott
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2009-01-08 Thread Scott Laird
RAID 2 is something weird that no one uses, and really only exists on
paper as part of Berkeley's original RAID paper, IIRC.  raidz2 is more
or less RAID 6, just like raidz is more or less RAID 5.  With raidz2,
you have to lose 3 drives per vdev before data loss occurs.
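
For reference, a double-parity vdev is created like this (device names
hypothetical):

# 6-disk raidz2: any 2 of the 6 disks can fail without losing data
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0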


Scott

On Thu, Jan 8, 2009 at 7:01 AM, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:
 Thank you. How does raidz2 compare to raid-2? Safer? Less safe?
 --
 This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-08 Thread Scott Laird
Today?  Low-power SSDs are probably less reliable than low-power hard
drives, although they're too new to really know for certain.  Given
the number of problems that vendors have had getting acceptable write
speeds, I'd be really amazed if they've done any real work on
long-term reliability yet.  Going forward, SSDs will almost certainly
be more reliable, as long as you have something SMART-ish watching the
number of worn-out SSD cells and recommending preemptive replacement
of worn-out drives every few years.  That should be a slow,
predictable process, unlike most HD failures.


Scott

On Thu, Jan 8, 2009 at 2:30 PM, JZ j...@excelsioritsolutions.com wrote:
 I was thinking about Apple's new SSD drive option on laptops...

 is that safer than Apple's HD or less safe? [maybe Orvar can help me on
 this]

 the price is a bit hefty for me to just order for experiment...
 Thanks!
 z at home


 - Original Message - From: Toby Thain t...@telegraphics.com.au
 To: JZ j...@excelsioritsolutions.com
 Cc: Scott Laird sc...@sigkill.org; Brandon High bh...@freaks.com;
 zfs-discuss@opensolaris.org; Peter Korn peter.k...@sun.com
 Sent: Thursday, January 08, 2009 5:25 PM
 Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?



 On 7-Jan-09, at 9:43 PM, JZ wrote:

 ok, Scott, that sounded sincere. I am not going to do the pic thing on
 you.

 But do I have to spell this out to you -- some things are invented not
 for home use?

 Cindy, would you want to do ZFS at home,

 Why would you disrespect your personal data? ZFS is perfect for home use,
 for reasons that have been discussed on this list and elsewhere.

 Apple also recognises this, which is why ZFS is in OS X 10.5 and will
 presumably become the default boot filesystem.

 Sorry to wander a little offtopic, but IMHO - Apple needs to acknowledge,
 and tell their customers, that hard drives are unreliable consumables.

 I am desperately looking forward to the day when they recognise the need
 to ship all their systems with:
 1) mirrored storage out of the box;
 2) easy user-swappable drives;
 3) foolproof fault notification and rectification.

 There is no reason why an Apple customer should not have this level of
 protection for her photo and video library, Great American Novel, or
 whatever. Time Machine is a good first step (though it doesn't often work
 smoothly for me with a LaCie external FW drive).

 These are the neglected pieces, IMHO, of their touted Digital Lifestyle.

 --Toby


 or just having some wine and music?

 Can we focus on commercial usage?
 please!



 - Original Message -
 From: Scott Laird sc...@sigkill.org
 To: Brandon High bh...@freaks.com
 Cc: zfs-discuss@opensolaris.org; Peter Korn peter.k...@sun.com
 Sent: Wednesday, January 07, 2009 9:28 PM
 Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?


 On Wed, Jan 7, 2009 at 4:53 PM, Brandon High bh...@freaks.com  wrote:

 On Wed, Jan 7, 2009 at 7:45 AM, Joel Buckley joel.buck...@sun.com
 wrote:

 How much is your time worth?

 Quite a bit.

 Consider the engineering effort going into every Sun Server.
 Any system from Sun is more than sufficient for a home server.
 You want more disks, then buy one with more slots.  Done.

 A few years ago, I put together the NAS box currently in use at home
 for $300 for 1TB of space. Mind you, I recycled the RAM from another
 box and the four 250GB disks were free. I think 250 drives were  around
 $200 at the time, so let's say the system price was $1200.

 I don't think there's a Sun server that takes 4+ drives anywhere  near
 $1200. The X4200 uses 2.5" drives, but costs $4255. Actually adding
 more drives ups the cost further. That means the afternoon I spent
 setting my server up was worth $3000. I should tell my boss that.

 A more reasonable comparison would be the Ultra 24. A system with
 4x250 drives is $1650. I could build a 4 TB system today for *less*
 than my 1TB system of 2 years ago, so let's use 3x750 + 1x250  drives.
 (That's all the store will let me) and the price jumps to $2641.

 Assume that I buy the cheapest x64 system (the X2100 M2 at $1228)  and
 add a drive tray because I want 4 drives ... well I can't. The
 cheapest drive tray is $7465.

 I have trouble justifying Sun hardware for many business  applications
 that don't require SPARC, let alone for the home. For custom systems
 that most tinkerers would want at home, a shop like Silicon  Mechanics
 (http://www.siliconmechanics.com/) (or even Dell or HP) is almost
 always a better deal on hardware.

 I agree completely.  About a year ago I spent around $800 (w/o drives)
 on a NAS box for home.  I used a 4x PCI-X single-Xeon Supermicro MB, a
 giant case, and a single 8-port Supermicro SATA card.  Then I dropped
 a pair of 80 GB boot drives and 9x 500 GB drives into it.  With raidz2
 plus a spare, that gives me around 2.7T of usable space.  When I
 filled that up a few weeks back, I bought 2 more 8-port SATA cards, 2
 Supermicro CSE-M35T-1B 5-drive hot-swap bays, and 9 1.5T drives, all
 for under $2k.

Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-08 Thread Scott Laird
You can't trust any hard drive.  That's what backups are for :-).

Laptop hard drives aren't much worse than desktop drives, and 2.5"
SATA drives are cheap.  As long as they're easy to swap, then a drive
failure isn't the end of the world.  Order a new drive ($100 or so),
swap them, and restore from backup.

I haven't dealt with PC laptops in years, so I can't really compare models.


Scott

On Thu, Jan 8, 2009 at 2:40 PM, JZ j...@excelsioritsolutions.com wrote:
 Thanks Scott,
 I was really itchy to order one, now I just want to save that open $ for
 Remy+++.

 Then, next question, can I trust any HD for my home laptop? should I go get
 a Sony VAIO or a cheap China-made thing would do?
 big price delta...

 z at home

 - Original Message - From: Scott Laird sc...@sigkill.org
 To: JZ j...@excelsioritsolutions.com
 Cc: Toby Thain t...@telegraphics.com.au; Brandon High
 bh...@freaks.com; zfs-discuss@opensolaris.org; Peter Korn
 peter.k...@sun.com; Orvar Korvar knatte_fnatte_tja...@yahoo.com
 Sent: Thursday, January 08, 2009 5:36 PM
 Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?


 Today?  Low-power SSDs are probably less reliable than low-power hard
 drives, although they're too new to really know for certain.  Given
 the number of problems that vendors have had getting acceptable write
 speeds, I'd be really amazed if they've done any real work on
 long-term reliability yet.  Going forward, SSDs will almost certainly
 be more reliable, as long as you have something SMART-ish watching the
 number of worn-out SSD cells and recommending preemptive replacement
 of worn-out drives every few years.  That should be a slow,
 predictable process, unlike most HD failures.


 Scott

 On Thu, Jan 8, 2009 at 2:30 PM, JZ j...@excelsioritsolutions.com wrote:

 I was thinking about Apple's new SSD drive option on laptops...

 is that safer than Apple's HD or less safe? [maybe Orvar can help me on
 this]

 the price is a bit hefty for me to just order for experiment...
 Thanks!
 z at home


 - Original Message - From: Toby Thain
 t...@telegraphics.com.au
 To: JZ j...@excelsioritsolutions.com
 Cc: Scott Laird sc...@sigkill.org; Brandon High bh...@freaks.com;
 zfs-discuss@opensolaris.org; Peter Korn peter.k...@sun.com
 Sent: Thursday, January 08, 2009 5:25 PM
 Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?



 On 7-Jan-09, at 9:43 PM, JZ wrote:

  ok, Scott, that sounded sincere. I am not going to do the pic thing on
  you.

  But do I have to spell this out to you -- some things are invented not
  for home use?

 Cindy, would you want to do ZFS at home,

  Why would you disrespect your personal data? ZFS is perfect for home use,
  for reasons that have been discussed on this list and elsewhere.

 Apple also recognises this, which is why ZFS is in OS X 10.5 and will
 presumably become the default boot filesystem.

  Sorry to wander a little offtopic, but IMHO - Apple needs to acknowledge,
  and tell their customers, that hard drives are unreliable consumables.

 I am desperately looking forward to the day when they recognise the need
 to ship all their systems with:
 1) mirrored storage out of the box;
 2) easy user-swappable drives;
 3) foolproof fault notification and rectification.

  There is no reason why an Apple customer should not have this level of
  protection for her photo and video library, Great American Novel, or
  whatever. Time Machine is a good first step (though it doesn't often work
  smoothly for me with a LaCie external FW drive).

 These are the neglected pieces, IMHO, of their touted Digital Lifestyle.

 --Toby


 or just having some wine and music?

 Can we focus on commercial usage?
 please!



 - Original Message -
 From: Scott Laird sc...@sigkill.org
 To: Brandon High bh...@freaks.com
 Cc: zfs-discuss@opensolaris.org; Peter Korn peter.k...@sun.com
 Sent: Wednesday, January 07, 2009 9:28 PM
 Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?


 On Wed, Jan 7, 2009 at 4:53 PM, Brandon High bh...@freaks.com wrote:

 On Wed, Jan 7, 2009 at 7:45 AM, Joel Buckley joel.buck...@sun.com
 wrote:

 How much is your time worth?

 Quite a bit.

 Consider the engineering effort going into every Sun Server.
 Any system from Sun is more than sufficient for a home server.
 You want more disks, then buy one with more slots.  Done.

 A few years ago, I put together the NAS box currently in use at home
 for $300 for 1TB of space. Mind you, I recycled the RAM from another
  box and the four 250GB disks were free. I think 250 drives were around
  $200 at the time, so let's say the system price was $1200.

 I don't think there's a Sun server that takes 4+ drives anywhere near
  $1200. The X4200 uses 2.5" drives, but costs $4255. Actually adding
 more drives ups the cost further. That means the afternoon I spent
 setting my server up was worth $3000. I should tell my boss that.

 A more reasonable comparison would be the Ultra 24. A system with
 4x250 drives is $1650. I could build a 4 TB system today for *less*
 than my 1TB system of 2 years ago, so let's use 3x750 + 1x250 drives.
 (That's all the store will let me) and the price jumps to $2641.

Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-07 Thread Scott Laird
On Wed, Jan 7, 2009 at 4:53 PM, Brandon High bh...@freaks.com wrote:
 On Wed, Jan 7, 2009 at 7:45 AM, Joel Buckley joel.buck...@sun.com wrote:
 How much is your time worth?

 Quite a bit.

 Consider the engineering effort going into every Sun Server.
 Any system from Sun is more than sufficient for a home server.
 You want more disks, then buy one with more slots.  Done.

 A few years ago, I put together the NAS box currently in use at home
 for $300 for 1TB of space. Mind you, I recycled the RAM from another
 box and the four 250GB disks were free. I think 250 drives were around
 $200 at the time, so let's say the system price was $1200.

 I don't think there's a Sun server that takes 4+ drives anywhere near
 $1200. The X4200 uses 2.5" drives, but costs $4255. Actually adding
 more drives ups the cost further. That means the afternoon I spent
 setting my server up was worth $3000. I should tell my boss that.

 A more reasonable comparison would be the Ultra 24. A system with
 4x250 drives is $1650. I could build a 4 TB system today for *less*
 than my 1TB system of 2 years ago, so let's use 3x750 + 1x250 drives.
 (That's all the store will let me) and the price jumps to $2641.

 Assume that I buy the cheapest x64 system (the X2100 M2 at $1228) and
 add a drive tray because I want 4 drives ... well I can't. The
 cheapest drive tray is $7465.

 I have trouble justifying Sun hardware for many business applications
 that don't require SPARC, let alone for the home. For custom systems
 that most tinkerers would want at home, a shop like Silicon Mechanics
 (http://www.siliconmechanics.com/) (or even Dell or HP) is almost
 always a better deal on hardware.

I agree completely.  About a year ago I spent around $800 (w/o drives)
on a NAS box for home.  I used a 4x PCI-X single-Xeon Supermicro MB, a
giant case, and a single 8-port Supermicro SATA card.  Then I dropped
a pair of 80 GB boot drives and 9x 500 GB drives into it.  With raidz2
plus a spare, that gives me around 2.7T of usable space.  When I
filled that up a few weeks back, I bought 2 more 8-port SATA cards, 2
Supermicro CSE-M35T-1B 5-drive hot-swap bays, and 9 1.5T drives, all
for under $2k.  That's around $0.25/GB for the expansion and $0.36
overall, including last year's expensive 500G drives.

The closest that I can come to this config using current Sun hardware
is probably the X4540 w/ 500G drives; that's $35k for 14T of usable
disk (5x 8-way raidz2 + 1 spare + 2 boot disks), $2.48/GB.  It's much
nicer hardware but I don't care.  I'd also need an electrician (for 2x
240V circuits), a dedicated server room in my house (for the fan
noise), and probably a divorce lawyer :-).

Sun's hardware really isn't price-competitive on the low end,
especially when commercial support offerings have no value to you.
There's nothing really wrong with this, as long as you understand that
Sun's really only going to be selling into shops where Sun's support
and extra engineering makes financial sense.  In Sun's defense, this
is kind of an odd system, specially built for unusual requirements.

My NAS box works well enough for me.  It's probably eaten ~20 hours of
my time over the past year, partially because my Solaris is really
rusty and partially because pkg has left me with broken, unbootable
systems twice :-(.  It's hard to see how better hardware would have
helped with that, though.


Scott
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-07 Thread Scott Laird
On Wed, Jan 7, 2009 at 6:43 PM, JZ j...@excelsioritsolutions.com wrote:
 ok, Scott, that sounded sincere. I am not going to do the pic thing on you.

 But do I have to spell this out to you -- some things are invented not for
 home use?

Yeah, I'm sincere, but I've ordered more or less the same type of
hardware for commercial uses in the past.  There are a number of uses
for big, slow, cheap storage systems.  Disk-based backup is an easy
one--from a price/capacity standpoint, it's really hard to beat a
rackload of 4U systems stuffed full of cheap disks.

Not every application needs redundant power, multi-pathed disks,
highly-engineered servers, and a fleet of support engineers waiting
for your call.  In my experience, very few applications actually need
that--cheap, somewhat reliable systems with good replication and
failover usually beat enterprise-grade hardware anytime that the
cheaper hardware is even an option.  If you have a high transaction
rate, a need for perfect coherency and consistency, and failure is
expensive, then spending 3-10x the money for slightly higher
performance and slightly lower failure rates makes perfect sense.

Then again, I'm used to having enough quantity flying around to make
the cost differences worth it.  Spending 100 hours of staff time to
save $2k up front is dumb.  The last time I built commercial storage
servers like this, it took about two extra months of my time dealing
with vendors and qualifying hardware, but we shaved $250k off of a
$350k budget when the company was strapped for cash.  That was an easy
call.

It's all about quantifying your risks and knowing what you really
need.  In my experience, any time you can make software-based
replication do what you want, and you aren't paying massive per-server
software license fees, you're probably better off with a larger number
of cheaper systems vs. a smaller number of more expensive systems.


Scott
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Unable to add cache device

2009-01-02 Thread Scott Laird
I'm trying to add a pair of new cache devices to my zpool, but I'm
getting the following error:

# zpool add space cache c10t7d0
Assertion failed: nvlist_lookup_string(cnv, path, path) == 0, file
zpool_vdev.c, line 650
Abort (core dumped)

I replaced a failed disk a few minutes before trying this, so the
zpool is still resilvering.  The pool also has an existing cache
device, so this will be the second (with a third waiting at c10t6d0).

The error message is kind of opaque, and I don't have the ZFS source
handy to look at the assertion and see what it's checking.  Is this
caused by the resilvering or is something wrong?


Scott
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Unable to add cache device

2009-01-02 Thread Scott Laird
On Fri, Jan 2, 2009 at 4:52 PM, Akhilesh Mritunjai
mritun+opensola...@gmail.com wrote:
 As for source, here you go :)

 http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/zpool/zpool_vdev.c#650

Thanks.  It's in the middle of get_replication, so I suspect it's a
bug--zpool tries to check on the replication status of existing vdevs
and croaks in the process.  As it turns out, I was able to add the
cache devices just fine once the resilver completed.

Out of curiosity, what's the easiest way to shove a file into the
L2ARC?  Repeated reads with dd if=file of=/dev/null doesn't appear to
do the trick.


Scott
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Unable to add cache device

2009-01-02 Thread Scott Laird
On Fri, Jan 2, 2009 at 8:54 PM, Richard Elling richard.ell...@sun.com wrote:
 Scott Laird wrote:

 On Fri, Jan 2, 2009 at 4:52 PM, Akhilesh Mritunjai
 mritun+opensola...@gmail.com wrote:


 As for source, here you go :)


 http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/zpool/zpool_vdev.c#650


 Thanks.  It's in the middle of get_replication, so I suspect it's a
 bug--zpool tries to check on the replication status of existing vdevs
 and croaks in the process.  As it turns out, I was able to add the
 cache devices just fine once the resilver completed.


 It is a bug because the assertion failed.  Please file one.
 http://en.wikipedia.org/wiki/Assertion_(computing)
 http://bugs.opensolaris.org

 Out of curiosity, what's the easiest way to shove a file into the
 L2ARC?  Repeated reads with dd if=file of=/dev/null doesn't appear to
 do the trick.


 To put something in the L2ARC, it has to be purged from the ARC.
 So until you run out of space in the ARC, nothing will be placed into
 the L2ARC.

I have a ~50G working set and 8 GB of RAM, so I'm out of space in my
ARC.  My read rate is low enough for the disks to keep up, but I'd
like to see lower latency.  Also, 30G SSDs were cheap last week :-).

My big problem is that dd if=file of=/dev/null doesn't appear to
actually read the whole file--I can loop over 50G of data in about 20
seconds while doing under 100 MB/sec of disk I/O.  Does Solaris's dd
have some sort of of=/dev/null optimization?  Adding conv=swab seems
to be making it work better, but I'm still only seeing write rates of
~1 MB/sec per SSD, even though they're mostly empty.
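
If dd itself is the thing short-circuiting the reads, one workaround
sketch is to pipe the data through a consumer that has to touch every
byte (paths hypothetical):

# cksum has to checksum every byte, so the read can't be skipped
cksum /space/bigfile

# or walk a whole directory tree:
find /space/data -type f -exec cksum {} + > /dev/null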


Scott
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs zpool recommendation

2008-10-29 Thread Scott Laird
On Wed, Oct 29, 2008 at 3:42 PM, Mike [EMAIL PROTECTED] wrote:
 By "better" I meant the best practice for a server running the Netbackup
 application.

 I am not seeing how using raidz would be a performance hit. Usually stripes 
 perform faster than mirrors.

raidz performs reads from all devices in parallel, so you get 1
drive's worth of I/O operations, not 6 drives' worth.  With 3 mirrors,
you'd get 6 drives' worth of reads and 3 drives' worth of writes.
Using raidz might get you slightly better read and write bandwidth,
though.
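
To put rough numbers on it, assuming ~100 random IOPS per SATA drive
and the same 6 drives either way:

  1x 6-disk raidz : ~1 x 100 = ~100 random read IOPS (every data disk
                    is touched on each read)
  3x 2-way mirror : ~6 x 100 = ~600 random read IOPS,
                    ~3 x 100 = ~300 random write IOPS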


Scott
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Verify files' checksums

2008-10-25 Thread Scott Laird
On Sat, Oct 25, 2008 at 1:57 PM, Marcus Sundman [EMAIL PROTECTED] wrote:
 I don't want to scrub several TiB of data just to verify a 2 MiB file. I
 want to verify just the data of that file. (Well, I don't mind also
 verifying whatever other data happens to be in the same blocks.)

Just read the file.  If the checksum is valid, then it'll read without
problems.  If it's invalid, then it'll be rebuilt (if you have
redundancy in your pool) or you'll get I/O errors (if you don't).
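
For example, something like this (path hypothetical):

# If every block's checksum verifies, the read succeeds; an
# unrecoverable error shows up as a failed read instead.
dd if=/tank/data/file.bin of=/dev/null bs=128k && echo file reads cleanly

# Then check whether ZFS found (and repaired) anything on the way:
zpool status -v tank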


Scott
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Looking for some hardware answers, maybe someone on this list could help

2008-10-15 Thread Scott Laird
The onboard SATA ports work on the PDSME+.  One of these days I'm
going to pick up a couple of Supermicro's 5-in-3 enclosures for mine:

  http://www.newegg.com/Product/Product.aspx?Item=N82E16817121405


Scott

On Wed, Oct 15, 2008 at 12:26 AM, mike [EMAIL PROTECTED] wrote:
 Good news - I got snv_98 up without a hitch. So far, so good.

 Onboard video works great (well, console. Haven't used X11)
 Top NIC works great (e1000g) - haven't tried the second NIC
 Did not try the onboard SATA
 Two Supermicro AOC-SAT2-MV8 PCI-X's working well

 Here's the specifics:
 - LIAN LI PC-V2110B Black Aluminum ATX Full Tower Computer Case
 - PC Power  Cooling S75QB 750W ATX12V / EPS12V SLI NVIDIA SLI
 Certified (Dual 8800 -GTX and below) CrossFire Ready 80 PLUS Certified
 Active PFC Power Supply
 - SUPERMICRO MBD-PDSME+-O LGA 775 Intel 3010 ATX Server Motherboard
 - 2x Kingston 2GB 240-Pin DDR2 SDRAM DDR2 667 (PC2 5300) ECC
 Unbuffered Server Memory Model KVR667D2E5/2GI
 - 2x SUPERMICRO AOC-SAT2-MV8 64-bit PCI-X133MHz SATA Controller Card
 - 2x Seagate 160 gig for mirrored boot
 - 7x Seagate 1.5TB for data (second batch of 7 when I fill this batch)

 Just about all of it thanks to Newegg. I will need to pick up some
 4-in-3 enclosures and a better CPU heatsink/fan - this is supposed to
 be quiet but it has an annoying hum. Weird. Anyway, so far so good.
 Hopefully the power supply can handle all 16 disks too...

 On Thu, Oct 9, 2008 at 12:46 PM, mike [EMAIL PROTECTED] wrote:
 There's plenty of 8 port, either full 8 or 6+2 combinations etc.

 Anyway I went with a Supermicro PDSME+ which appears to work well
 according to the HCL, and bought two of the AOC-SAT2-MV8's and will
 just use those. It's actually being delivered today...

 On Thu, Oct 9, 2008 at 9:44 AM, Joe S [EMAIL PROTECTED] wrote:
 You may need an add-on SATA card. I haven't come across any 8 port 
 motherboards.

 As far as chipsets are concerned, take a look at something with the
 Intel X38 chipset. It's the only one of the desktop chipsets that
 supports ECC ram. Coincidentally, it's also the chipset used in the
 Sun Ultra 24 workstation
 (http://www.sun.com/desktop/workstation/ultra24/index.xml).


 On Mon, Oct 6, 2008 at 1:41 PM, mike [EMAIL PROTECTED] wrote:
 I posted a thread here...
 http://forums.opensolaris.com/thread.jspa?threadID=596

 I am trying to finish building a system and I kind of need to pick
 working NIC and onboard SATA chipsets (video is not a big deal - I can
 get a silent PCIe card for that, I already know one which works great)

 I need 8 onboard SATA. I would prefer Intel CPU. At least one gigabit
 port. That's about it.

 I built a list in that thread of all the options I found from the
 major manufacturers that Newegg has as the pool of possible
 chipsets/etc... any help is appreciated (anyone actually using any of
 these) - and remember I'm trying to use Nevada out of the box, not
 have to download specific drivers and tweak all this myself...

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Looking for some hardware answers, maybe someone on this list could help

2008-10-15 Thread Scott Laird
Oh, also I kind of doubt that a 750W power supply will spin 16 disks
up reliably.  I have 10 in mine with a 600W supply, and it's
borderline--10 drives work, 11 doesn't, and adding a couple extra PCI
cards has pushed mine over the edge before.  Most 3.5" drives want
about 30W at startup; that'd be around 780W with 16 drives.

I wish delayed spinup wasn't such a pain with SATA.


Scott

On Wed, Oct 15, 2008 at 3:27 PM, Scott Laird [EMAIL PROTECTED] wrote:
 The onboard SATA ports work on the PDSME+.  One of these days I'm
 going to pick up a couple of Supermicro's 5-in-3 enclosures for mine:

  http://www.newegg.com/Product/Product.aspx?Item=N82E16817121405


 Scott

 On Wed, Oct 15, 2008 at 12:26 AM, mike [EMAIL PROTECTED] wrote:
 Good news - I got snv_98 up without a hitch. So far, so good.

 Onboard video works great (well, console. Haven't used X11)
 Top NIC works great (e1000g) - haven't tried the second NIC
 Did not try the onboard SATA
 Two Supermicro AOC-SAT2-MV8 PCI-X's working well

 Here's the specifics:
 - LIAN LI PC-V2110B Black Aluminum ATX Full Tower Computer Case
 - PC Power  Cooling S75QB 750W ATX12V / EPS12V SLI NVIDIA SLI
 Certified (Dual 8800 -GTX and below) CrossFire Ready 80 PLUS Certified
 Active PFC Power Supply
 - SUPERMICRO MBD-PDSME+-O LGA 775 Intel 3010 ATX Server Motherboard
 - 2x Kingston 2GB 240-Pin DDR2 SDRAM DDR2 667 (PC2 5300) ECC
 Unbuffered Server Memory Model KVR667D2E5/2GI
 - 2x SUPERMICRO AOC-SAT2-MV8 64-bit PCI-X133MHz SATA Controller Card
 - 2x Seagate 160 gig for mirrored boot
 - 7x Seagate 1.5TB for data (second batch of 7 when I fill this batch)

 Just about all of it thanks to Newegg. I will need to pick up some
 4-in-3 enclosures and a better CPU heatsink/fan - this is supposed to
 be quiet but it has an annoying hum. Weird. Anyway, so far so good.
 Hopefully the power supply can handle all 16 disks too...

 On Thu, Oct 9, 2008 at 12:46 PM, mike [EMAIL PROTECTED] wrote:
 There's plenty of 8 port, either full 8 or 6+2 combinations etc.

 Anyway I went with a Supermicro PDSME+ which appears to work well
 according to the HCL, and bought two of the AOC-SAT2-MV8's and will
 just use those. It's actually being delivered today...

 On Thu, Oct 9, 2008 at 9:44 AM, Joe S [EMAIL PROTECTED] wrote:
 You may need an add-on SATA card. I haven't come across any 8 port 
 motherboards.

 As far as chipsets are concerned, take a look at something with the
 Intel X38 chipset. It's the only one of the desktop chipsets that
 supports ECC ram. Coincidentally, it's also the chipset used in the
 Sun Ultra 24 workstation
 (http://www.sun.com/desktop/workstation/ultra24/index.xml).


 On Mon, Oct 6, 2008 at 1:41 PM, mike [EMAIL PROTECTED] wrote:
 I posted a thread here...
 http://forums.opensolaris.com/thread.jspa?threadID=596

 I am trying to finish building a system and I kind of need to pick
 working NIC and onboard SATA chipsets (video is not a big deal - I can
 get a silent PCIe card for that, I already know one which works great)

 I need 8 onboard SATA. I would prefer Intel CPU. At least one gigabit
 port. That's about it.

 I built a list in that thread of all the options I found from the
 major manufacturers that Newegg has as the pool of possible
 chipsets/etc... any help is appreciated (anyone actually using any of
 these) - and remember I'm trying to use Nevada out of the box, not
 have to download specific drivers and tweak all this myself...


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Looking for some hardware answers, maybe someone on this list could help

2008-10-15 Thread Scott Laird
On Wed, Oct 15, 2008 at 4:12 PM, Will Murnane [EMAIL PROTECTED] wrote:
 On Wed, Oct 15, 2008 at 18:30, Scott Laird [EMAIL PROTECTED] wrote:
 Oh, also I kind of doubt that a 750W power supply will spin 16 disks
 up reliably.  I have 10 in mine with a 600W supply, and it's
 borderline--10 drives work, 11 doesn't, and adding a couple extra PCI
 cards has pushed mine over the edge before.
 Power supply stress survival is more a function of dollars paid (or
 pounds weighed, if you like) than of any of the numbers on the box.
 I've done 14 drives on a 550W power supply (with no problems).
 Reputable places to search for power supply reviews are [1] and [2]
 (and others---but those are a good start).

  Most 3.5" drives want
 about 30W at startup; that'd be around 780W with 16 drives.
 I'm not sure what kind of math you're using here.

See 
http://www.seagate.com/staticfiles/support/disc/manuals/desktop/Barracuda%207200.11/100452348b.pdf

Seagate claims 2.8A @ 12V per drive at startup.  That's 33.6W.  The
operating draw is way lower--the last time I measured my E2160 + 10
disk system drew around 130W while idling and not a whole lot more
while active.


Scott
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Which is better for root ZFS: mlc or slc SSD?

2008-09-24 Thread Scott Laird
In general, I think SLC is better, but there are a number of brand-new
MLC devices on the market that are really fast; until a new generation
of SLC devices show up, the MLC drives kind of win by default.

Intel's supposed to have an SLC drive showing up early next year that
has similar read performance to their new MLC device but 2x the write
speed; that's at least 3 months out, though.


Scott

On Wed, Sep 24, 2008 at 12:16 PM, Neal Pollack [EMAIL PROTECTED] wrote:
 Tim wrote:

 On Wed, Sep 24, 2008 at 1:41 PM, Erik Trimble [EMAIL PROTECTED] wrote:

 I was under the impression that MLC is the preferred type of SSD, but I
 want to prevent myself from having a think-o.


 I'm looking to get (2) SSD to use as my boot drive. It looks like I can
 get 32GB SSDs composed of either SLC or MLC for roughly equal pricing.
 Which would be the better technology?  (I'll worry about rated access
 times/etc of the drives, I'm just wondering about general tech for an OS
 boot drive usage...)


 Depends on the MFG.  The new Intel MLC's have proven to be as fast if not
 faster than the SLC's,

 That is not comparing apples to apples.   The new Intel MLCs take the
 slower, lower cost MLC chips,
 and put them in parallel channels connected to an internal controller chip
 (think of RAID striping).
 That way, they get large aggregate speeds for less total cost.
 Other vendors will start to follow this idea.

 But if you just take a raw chip in one channel, SLC is faster.

 And, in the end, yes, the new intel SSDs are very nice.

 but they also cost just as much.  If they brought the price down, I'd say
 MLC all the way.  All other things being equal though, SLC.


 --Tim

 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] hardware for zfs home storage

2008-01-14 Thread Scott Laird
I have an Asus P5K WS motherboard with a cheap Core 2 Duo CPU (E2140,
$70 or so) and one of the cheap SuperMicro 8-port PCI-X SATA cards.
That gives you 14 supported SATA ports.  Throw 4 GB of RAM into it
(~$100) and then either use 500 GB or 750 GB drives.  One of the
Seagate 750s is down to $155 this week, which puts it close enough to
the 500s ($90-120) that it might be worth considering.  I threw
everything into a Lian Li PC-V2000A Plus II case, which is kind of
pricy (compared to cheap PC cases, not compared to STK hardware :-)
but holds 12 drives without any problem at all, and 20 drives with a
bit of extra hardware.  Before drives, the whole system's well under
$1k, and it's been working perfectly for months now.

I'm using raidz2 across 8 drives, but if I had it to do again, I'd
probably just use mirroring.  Unfortunately, raidz2 kills your random
read and write performance, and that makes Time Machine really, really
slow.  I'm running low on space now, and considering throwing another
8 drives into the case in the spring, if I can find a cheap 8-port
PCI-E SATA card.  When that happens, I'll probably try to convert
everything to mirroring.


Scott

On Jan 14, 2008 8:33 AM, Alex [EMAIL PROTECTED] wrote:
 Hi,

 I'm sure this has been asked many times and though a quick search didn't 
 reveal anything illuminating, I'll post regardless.

 I am looking to make a storage system available on my home network. I need 
 storage space in the order of terabytes as I have a growing iTunes collection 
 and tons of MP3s that I converted from vinyl. At this time I am unsure of the 
 growth rate, but I suppose it isn't unreasonable to look for 4TB usable 
 storage. Since I will not be backing this up, I think I want RAIDZ2.

 Since this is for home use, I don't want to spend an inordinate amount of 
 money. I did look at the cheaper STK arrays, but they're more than what I 
 want to pay, so I am thinking that puts me in the white-box market. Power 
 consumption would be nice to keep low also.

 I don't really care if it's external or internal disks. Even though I don't 
 want to get completely skinned over the money, I also don't want to buy 
 something that is unreliable.

 I am very interested as to your thoughts and experiences on this. E.g. what 
 to buy, what to stay away from.

 Thanks in advance!


 This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] hardware for zfs home storage

2008-01-14 Thread Scott Laird
Everything except the SuperMicro SATA card came from Newegg.  They
didn't have the card in stock at the time, so I ordered it from
buy.com.


Scott

On Jan 14, 2008 9:33 AM, Alex [EMAIL PROTECTED] wrote:
 Thanks a bunch! I'll look into this very config. Just one Q, where did you 
 get the case?



 This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] hardware for zfs home storage

2008-01-14 Thread Scott Laird
Run 'defaults write com.apple.systempreferences
TMShowUnsupportedNetworkVolumes 1' as root.  I've been using it since
November without problems, but I haven't actually had to restore
anything in anger yet.

There's a rumor that Apple will be officially adding network support
to Time Machine this week, but who knows.


Scott

On Jan 14, 2008 9:40 AM, Arne Schwabe [EMAIL PROTECTED] wrote:
 Scott Laird wrote:
  I have an Asus P5K WS motherboard with a cheap Core 2 Duo CPU (E2140,
  $70 or so) and one of the cheap SuperMicro 8-port PCI-X SATA cards.
  That gives you 14 supported SATA ports.  Throw 4 GB of RAM into it
  (~$100) and then either use 500 GB or 750 GB drives.  One of the
  Seagate 750s is down to $155 this week, which puts it close enough to
  the 500s ($90-120) that it might be worth considering.  I threw
  everything into a Lian Li PC-V2000A Plus II case, which is kind of
  pricy (compared to cheap PC cases, not compared to STK hardware :-)
  but holds 12 drives without any problem at all, and 20 drives with a
  bit of extra hardware.  Before drives, the whole system's well under
  $1k, and it's been working perfectly for months now.
 
  I'm using raidz2 across 8 drives, but if I had it to do again, I'd
  probably just use mirroring.  Unfortunately, raidz2 kills your random
  read and write performance, and that makes Time Machine really, really
  slow.  I'm running low on space now, and considering throwing another
  8 drives into the case in the spring, if I can find a cheap 8-port
  PCI-E SATA CARD.  When that happens, I'll probably try to convert
  everything to mirroring.
 
 
 Just a question: how did you make Time Machine work on a network drive?

 Arne

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] hardware for zfs home storage

2008-01-14 Thread Scott Laird
I'm using smb.  Mount the share via the Finder, then go to the Time
Machine pref pane, and it should show up.


Scott

On Jan 14, 2008 10:03 AM, Brian Hechinger [EMAIL PROTECTED] wrote:
 On Mon, Jan 14, 2008 at 09:52:38AM -0800, Scott Laird wrote:
  Run 'defaults write com.apple.systempreferences
  TMShowUnsupportedNetworkVolumes 1' as root.  I've been using it since
  November without problems, but I haven't actually had to restore
  anything in anger yet.

 I couldn't get that to work with NFS shares; has anyone else?

 -brian
 --
 Perl can be fast and elegant as much as J2EE can be fast and elegant.
 In the hands of a skilled artisan, it can and does happen; it's just
 that most of the shit out there is built by people who'd be better
 suited to making sure that my burger is cooked thoroughly.  -- Jonathan 
 Patschke


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] hardware for zfs home storage

2008-01-14 Thread Scott Laird
I've been tempted to get one of my neighbors to host a small box with
~4 drives and then either rsync or zfs send backups to it over wifi;
that'd protect against fire or theft, but not major earthquakes.  I
don't think we're at risk from any other obvious disasters.  The
up-front cost would be kind of steep, but sending 50 GB of new data at
a time would be trivial, unlike most online services.  With my current
DSL link, it'd take at least a week to ship 50 GB of data offsite, and
I have ~2 TB in use.  Even if I exclude some filesystems, it'd still
be a mess.  Of course, you have to be on good terms with your
neighbors for this to work.
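
The send side of that would look something like this (host and dataset
names hypothetical, assuming ssh access to the neighbor's box):

# One-time full copy to the neighbor's box:
zfs snapshot tank/home@week1
zfs send tank/home@week1 | ssh neighbor zfs recv -d backup

# After that, only the new data crosses the wifi link:
zfs snapshot tank/home@week2
zfs send -i @week1 tank/home@week2 | ssh neighbor zfs recv -d backup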


Scott

On Jan 14, 2008 3:10 PM, Tim Cook [EMAIL PROTECTED] wrote:
 Another free.99 option if you have the extra hardware lying around is 
 boxbackup.

 http://www.boxbackup.org/

 I haven't used it personally, but heard good things.


 This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on OS X port now on macosforge

2008-01-09 Thread Scott Laird
On Jan 9, 2008 11:26 AM, Noël Dellofano [EMAIL PROTECTED] wrote:
  As soon as I get in to work and can backup my sparsebundle to a spare
  MBP, I'm going to start banging on it.

 Sweet deal :)

  So, do you have all of /Users on zfs, just one account, have you tried
  a FileVaulted account too? Or is that just crazy talk? :-)

 I currently just have one account, my personal one, to use ZFS.  Then
 I just have another local admin account that uses HFS+ that I don't
 really use for anything except occasional testing.  In my current
 setup, I created a pool, and I have 2 filesystems in it, one of which
 is my home directory.  Then I just created my  account and pointed it
 to use that directory for my home dir.
 I haven't experimented with File Vault yet at all, so feel free to
 have at it.  Hopefully when we get encryption for ZFS then we'll be
 able to just offer it natively that way.

So Leopard is able to use ZFS without any of the weird compatibility
problems that used to plague UFS users?


Scott
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on OS X port now on macosforge

2008-01-09 Thread Scott Laird
Okay, great.  I've been enjoying ZFS on my home OpenSolaris box, and I
have a new multi-drive Mac in my near future, and I'd love to have ZFS
as an option at some point.


Scott

On Jan 9, 2008 4:26 PM, Noël Dellofano [EMAIL PROTECTED] wrote:
  As I mentioned, ZFS is still BETA, so there are (and likely will be)
  some issues turning up with compatibility with the upper layers of
  the system, if that's what you're referring to.  But we're working
  hard on fixing these as they come up.  The end goal is that there
  shouldn't be any weird compatibility issues with the rest of the
  system.

 Noel


 On Jan 9, 2008, at 2:38 PM, Scott Laird wrote:

  On Jan 9, 2008 11:26 AM, Noël Dellofano [EMAIL PROTECTED] wrote:
  As soon as I get in to work and can backup my sparsebundle to a
  spare
  MBP, I'm going to start banging on it.
 
  Sweet deal :)
 
  So, do you have all of /Users on zfs, just one account, have you
  tried
  a FileVaulted account too? Or is that just crazy talk? :-)
 
  I currently just have one account, my personal one, to use ZFS.  Then
  I just have another local admin account that uses HFS+ that I don't
  really use for anything except occasional testing.  In my current
  setup, I created a pool, and I have 2 filesystems in it, one of which
  is my home directory.  Then I just created my  account and pointed it
  to use that directory for my home dir.
  I haven't experimented with File Vault yet at all, so feel free to
  have at it.  Hopefully when we get encryption for ZFS then we'll be
  able to just offer it natively that way.
 
  So Leopard is able to use ZFS without any of the weird compatibility
  problems that used to plague UFS users?
 
 
  Scott


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Seperate ZIL

2007-12-06 Thread Scott Laird
On 12/6/07, Brian Hechinger [EMAIL PROTECTED] wrote:
 On Wed, Dec 05, 2007 at 06:12:18PM -0600, Al Hopper wrote:
 
  PS: LsiLogic just updated their SAS HBAs and have a couple of products
  very reasonably priced IMHO.  Combine that with a (single ?) Fujitsu
  MAX3xxxRC (where xxx represents the size) and you'll be wearing a big
  smile every time you work on a system so equipped.

 Hmmm, on second glance, 36G versions of that seem to be going for $40.

Do you mean $140, or am I missing a really good deal somewhere?


Scott
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best hardware

2007-11-10 Thread Scott Laird
I used the Asus P5K WS motherboard with 1 PCI-X slot and an Intel
E2140 CPU (Core 2 Duo, 1.6 GHz, 64 bits, 45W).  It works fine.  With
8x 500 GB drives in a raidz2 array, I'm getting ~160 MB/sec writing
and 280 MB/sec reading.

See 
http://scottstuff.net/blog/articles/2007/10/20/notes-from-installing-opensolaris-snv_72

Samba talking to OS X is kind of slow, but that seems to be the Mac's
fault, and I haven't had time to do any tuning yet.


Scott

On 11/10/07, Matt [EMAIL PROTECTED] wrote:
 Hi all,

 I am currently planning a new home file server on a gigabit network that will 
 be utilizing ZFS (on SXDE).  The files will be shared via samba as I have a 
 mixed OS environment.  The controller card I will be using is the SuperMicro 
 SAT2-MV8 133MHz PCI-X card.  I have two options for CPUs/motherboards:

 AMD Athlon64 3000+ (64 bit)
 DFI LanParty UT 250gb (NForce 3 based) motherboard
 32 bit PCI slots only
 2GB RAM

 or

 Dual Intel Xeon 1.6GHz CPUs (32 bit)
 ASUS PCH-DL motherboard
 PCI-X slots @ 66MHz
 2GB RAM

 I am trying to figure out where my bottleneck will be for file transfers.  
 Will it be the controller card running in a regular PCI slot on the AMD 
 setup?  Will it be the 32 bit Intel system? Or will using samba overshadow 
 either of the hardware options?  Any suggestions would be greatly 
 appreciated.  Thanks.

 Matt


 This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Yager on ZFS

2007-11-09 Thread Scott Laird
Most video formats are designed to handle errors--they'll drop a frame
or two, but they'll resync quickly.  So, depending on the size of the
error, there may be a visible glitch, but it'll keep working.

Interestingly enough, this applies to a lot of MPEG-derived formats as
well, like MP3.  I had a couple bad copies of MP3s that I tried to
listen to on my computer a few weeks ago (podcasts copied via
bluetooth off of my phone, apparently with no error checking), and it
made the story hard to follow when a few seconds would disappear out
of the middle, but it didn't destroy the file.


Scott

On 11/9/07, David Dyer-Bennet [EMAIL PROTECTED] wrote:
 can you guess? wrote:

  CERN was using relatively cheap disks and found that they were more than 
  adequate (at least for any normal consumer use) without that additional 
  level of protection:  the incidence of errors, even including the firmware 
  errors which presumably would not have occurred in a normal consumer 
  installation lacking hardware RAID, was on the order of 1 per TB - and 
  given that it's really, really difficult for a consumer to come anywhere 
  near that much data without most of it being video files (which just laugh 
  and keep playing when they discover small errors) that's pretty much 
  tantamount to saying that consumers would encounter no *noticeable* errors 
  at all.
 

 I haven't played with bit errors in video.  A bit error in a JPEG
 generally corrupts everything after that point.  And it's pretty easy
 for people to have a TB or so of image files of various sorts.
 Furthermore, I'm interested in archiving those for at least the rest of
 my life.

 Because I'm in touch with a number of professional photographers, who
 have far more pictures than I do, I think of 1TB as a level a lot of
 people are using in a non-IT context, with no professional sysadmin
 involved in maintaining or designing their storage schemes.

 I think all of these are good reasons why people *do* care about errors
 at the levels you mention.

 One of my photographer friends found a bad cable in one of his computers
 that was upping his error rate by an order of magnitude (to 10^-13 I
 think).  Having ZFS would have made this less dangerous, and detected it
 more quickly.

 Generally, I think you underestimate the amount of data some people
 have, and how much they care about it.  I can't imagine this will
 decrease significantly over the next decade, either.

 --
 David Dyer-Bennet, [EMAIL PROTECTED]; http://dd-b.net/
 Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/
 Photos: http://dd-b.net/photography/gallery/
 Dragaera: http://dragaera.info


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Odd zpool status error

2007-11-01 Thread Scott Laird
I've had this happen once or twice now, running n74.  I'll run 'zpool
scrub' on my root pool and *immediately* get an error reported:

# zpool status -v tank
  pool: tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub in progress, 0.76% done, 0h41m to go
config:

NAME  STATE READ WRITE CKSUM
tank  ONLINE   0 0 0
  mirror  ONLINE   0 0 0
c0t0d0s0  ONLINE   0 0 0
c3t0d0s0  ONLINE   0 0 0

errors: Permanent errors have been detected in the following files:

//dev/dsk/c0t0d0s0

I *assume* that this is talking about the /dev/dsk/c0t0d0s0 *file*,
not the device.  Removing the file immediately causes 'zpool status'
to report the error as 'tank/rootfs:0x3840d'.  Recreating the device
file at this point makes no difference.

No matter what I do, though, the pool is magically clean at the end of
the scrub.  Is this a known bug?


Scott
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mac OS X 10.5.0 Leopard ships with a readonly ZFS

2007-10-26 Thread Scott Laird
My copy hasn't arrived yet, but look in the file menu in Disk Utility.

http://thinksecret.com/archives/leopard9a377a/source/25.html


Scott

On 10/26/07, Andy Lubel [EMAIL PROTECTED] wrote:

 Yeah, I'm pumped about this new release today... such harmony in my
 storage to be had.  Now if only OS X had a native iSCSI target/initiator!


 -Andy Lubel




 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Peter Woodman
 Sent: Friday, October 26, 2007 8:14 AM
 To: Kugutsumen
 Cc: zfs-discuss@opensolaris.org
 Subject: Re: [zfs-discuss] Mac OS X 10.5.0 Leopard ships with a readonly
 ZFS

 it would seem that the reason that it's been pulled is that it's
 installed by default in the release version (9A581) - just tested it
 here, and willikers, it works!

 On 10/26/07, Kugutsumen [EMAIL PROTECTED] wrote:
  # zfs list
  ZFS Readonly implemntation is loaded!
  To download the full ZFS read/write kext with all functionality
  enabled, please go to http://developer.apple.com
  no datasets available
 
  Unfortunately, I can't find it on ADC yet and it seems that it was
 removed by Apple:
 
  Another turn in the Apple-ZFS saga. Apple has made available a
 developer preview of ZFS for Mac OS X with read/write capability. The
 preview is available to all ADC members. From the readme file: ZFS is a
 new filesystem from Sun Microsystems which has been ported by Apple to
 Mac OS X. The initial (10.5.0) release of Leopard will restrict ZFS to
 read-only, so no ZFS pools or filesystems can be modified or created.
 This Developer Preview will enable full read/write capability, which
 includes the creation/destruction of ZFS pools and filesystems. Update:
 Will it ever end? The release has been pulled from ADC by Apple.
 
  I can't wait to reformat all my external 2.5" drives with zfs.
 
 
  This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Limiting the power of zfs destroy

2007-10-23 Thread Scott Laird
I'm writing a couple scripts to automate backups and snapshots, and I'm
finding myself cringing every time I call 'zfs destroy' to get rid of a
snapshot, because a small typo could take out the original filesystem
instead of a snapshot.  Would it be possible to add a flag (maybe -t type)
to zfs destroy to limit its destructive power?
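
In the meantime, a wrapper along these lines (an untested sketch)
would at least keep the scripts from ever pointing 'zfs destroy' at a
filesystem:

#!/bin/sh
# destroy-snapshot: refuse any argument that isn't a snapshot name
case "$1" in
  *@*) exec zfs destroy "$1" ;;
  *)   echo "$0: '$1' is not a snapshot, refusing" >&2; exit 1 ;;
esac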


Scott
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZIL reliability/replication questions

2007-10-22 Thread Scott Laird
On 10/18/07, Neil Perrin [EMAIL PROTECTED] wrote:
 
  The umem one is unavailable, but the Gigabyte model is easy to find.
  I had Amazon overnight one to me, it's probably sitting at home right
  now.

 Cool let us know how it goes.

Not so well.  I was completely unable to get the card to work at all.
The motherboard's BIOS wouldn't even list the GC-RAMDISK during the
bus scan.  Solaris saw it, but couldn't talk to it:

Oct 20 12:50:54 fs2 ahci: [ID 632458 kern.warning] WARNING:
ahci_port_reset: port 1 the device hardware has been initialized and
the power-up diagnostics failed

The Supermicro 8-port SATA card's BIOS saw it, but Solaris reported
errors at boot time:

Oct 20 12:06:00 fs2 marvell88sx: [ID 748163 kern.warning] WARNING:
marvell88sx0: device on port 5 still busy after reset

I tried using it with the motherboard's Marvell-based eSATA ports, but
that made the POST hang for a minute or two and Solaris spewed errors
all over the console after boot.

I'm sending it back.


Scott
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZIL reliability/replication questions

2007-10-18 Thread Scott Laird
I'm debating using an external intent log on a new box that I'm about
to start working on, and I have a few questions.

1.  If I use an external log initially and decide that it was a
mistake, is there a way to move back to the internal log without
rebuilding the entire pool?
2.  What happens if the logging device fails completely?  Does this
damage anything else in the pool, other then potentially losing
in-flight transactions?
3.  What about corruption in the log?  Is it checksummed like the rest of ZFS?

Thanks.


Scott
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZIL reliability/replication questions

2007-10-18 Thread Scott Laird
On 10/18/07, Neil Perrin [EMAIL PROTECTED] wrote:


 Scott Laird wrote:
  I'm debating using an external intent log on a new box that I'm about
  to start working on, and I have a few questions.
 
  1.  If I use an external log initially and decide that it was a
  mistake, is there a way to move back to the internal log without
  rebuilding the entire pool?

 It's not currently possible to remove a separate log.
 This was working once, but was stripped out until the
 more generic zpool remove devices was provided.
 This is bug 6574286:

 http://bugs.opensolaris.org/view_bug.do?bug_id=6574286

Okay, so hopefully it'll work in a couple quarters?

  2.  What happens if the logging device fails completely?  Does this
  damage anything else in the pool, other then potentially losing
  in-flight transactions?

 This should work. It shouldn't even lose the in-flight transactions.
 ZFS reverts to using the main pool if a slog write fails or the
 slog fills up.

So, the only way to lose transactions would be a crash or power loss,
leaving outstanding transactions in the log, followed by the log
device failing to start up on reboot?  I assume that that would be
handled relatively cleanly (files have out-of-date data), as opposed
to something nasty like the pool failing to start up.

  3.  What about corruption in the log?  Is it checksummed like the rest of 
  ZFS?

 Yes it's checksummed, but the checksumming is a bit different
 from the pool blocks in the uberblock tree.

 See also:
 http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_on

That started this whole mess :-).  I'd like to try out using one of
the Gigabyte SATA ramdisk cards that are discussed in the comments.
It supposedly has 18 hours of battery life, so a long-term power
outage would kill the log.  I could reasonably expect one 18+ hour
power outage over the life of the filesystem.  I'm fine with losing
in-flight data (I'd expect the log to be replayed before the UPS shuts
the system down anyway), but I'd rather not lose the whole pool or
something extreme like that.

I'm willing to trade the chance of some transaction losses during an
exceptional event for more performance, but I'd rather not have to
pull out the backups if I can ever avoid it.


Scott
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZIL reliability/replication questions

2007-10-18 Thread Scott Laird
On 10/18/07, Neil Perrin [EMAIL PROTECTED] wrote:
  So, the only way to lose transactions would be a crash or power loss,
  leaving outstanding transactions in the log, followed by the log
  device failing to start up on reboot?  I assume that that would be
  handled relatively cleanly (files have out-of-date data), as opposed
  to something nasty like the pool failing to start up.

 I just checked on the behaviour of this. The log is treated as part
 of the main pool. If it is not replicated and disappears then the pool
 can't be opened - just like any unreplicated device in the main pool.
  If the slog is found but can't be opened or is corrupted then the
 pool will be opened but the slog isn't used.
 This seems a bit inconsistent.

Hmm, yeah.  What would happen if I mirrored the ramdisk with a hard
drive?  Would ZFS block until the data's stable on both devices, or
would it continue once the write is complete on the ramdisk?

Failing that, would replacing the missing log with a blank device let
me bring the pool back up, or would it be dead at that point?

  3.  What about corruption in the log?  Is it checksummed like the rest of 
  ZFS?
  Yes it's checksummed, but the checksumming is a bit different
  from the pool blocks in the uberblock tree.
 
  See also:
  http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_on
 
  That started this whole mess :-).  I'd like to try out using one of
  the Gigabyte SATA ramdisk cards that are discussed in the comments.

 A while ago there was a comment on this alias that these cards
 weren't purchasable. Unfortunately, I don't know what is available.

The umem one is unavailable, but the Gigabyte model is easy to find.
I had Amazon overnight one to me, it's probably sitting at home right
now.


Scott
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss