Re: [zfs-discuss] Hard Drive Choice Question

2012-06-11 Thread Russ Price

On 05/17/2012 04:10 AM, Ian Collins wrote:

I wouldn't be too fussed about 7x24 rating in a home server.

I still have a set of 10 regular Seagate drives I bought in 2007 that were
spinning non stop for four years in a very hostile environment (my garage!).
They simply refuse to die and I'm still using them in various test systems.



Of course, your mileage will vary. Lately I've been having better luck with my 
500 GB WD Blue drives, while in two systems I have recently replaced dying 
Seagate 500 GB drives (7200.11) from 2009. One was part of a ZFS mirror, and the 
other a Linux mdraid mirror.


The drive in the OI box had a very large number of reallocated sectors and 
finally showed a bad SMART status (but no ZFS checksum errors on scrub). The 
drive in the Linux system started throwing read errors, yet had few reallocated 
sectors and still showed "PASSED" in the SMART data. Worse, it FAILED to 
reallocate sectors!


If you're running 2009-vintage 7200.11 Barracudas, be prepared to replace them 
soon if you haven't already done so.



Re: [zfs-discuss] Assessing health/performance of individual drives in ZFS pool

2011-04-07 Thread Russ Price

On 04/05/2011 03:01 PM, Tomas Ögren wrote:

On 05 April, 2011 - Joe Auty sent me these 5,9K bytes:

Has this changed, or are there any other techniques I can use to check
the health of an individual SATA drive in my pool short of what ZFS
itself reports?


Through scsi compat layer..

socker:~# smartctl -a -d scsi /dev/rdsk/c0t0d0s0


Note that you can get more complete information by using "-d sat,12" than by 
using "-d scsi". This works for me on both the onboard AHCI ports and the SATA 
drives connected to my Intel SASUC8I (LSI-based) HBA.
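For example, the equivalent of the command quoted above - substitute your own 
device path - would be:

  # Full ATA SMART attributes via SAT, using 12-byte pass-through commands:
  smartctl -a -d sat,12 /dev/rdsk/c0t0d0s0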



Re: [zfs-discuss] Configuration questions for Home File Server (CPU cores, dedup, checksum)?

2010-09-07 Thread Russ Price

On 09/07/2010 05:58 PM, Eric D. Mudama wrote:

How are you measuring using 60% across all four cores?

I kicked off a scrub just to see, and we're scrubbing at 200MB/s (2
vdevs) and the CPU is 94% idle, 6% kernel, 0% IOWAIT.

zpool-tank is using 3.2% CPU as shown by 'ps aux | grep tank'


Whoops... I misspoke - it should have been about 23-25% per core. I'm getting 
old. :o) I am using gkrellm to watch the CPU usage.


In any case, a scrub uses wildly different amounts of CPU at different times, 
and sometimes it uses far less (particularly early in the process, at least on 
my specific RAIDZ2). On the other hand, dd'ing a 23 GB video file on the RAIDZ2 
to /dev/null will consistently get the 23-25% per core figure.
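For anyone who wants to repeat the test, it's nothing fancier than something 
like this (the file name is just an example), watched alongside gkrellm:

  # Sequential read of a large file from the pool, discarding the output:
  dd if=/tank/video/some-23GB-file.mkv of=/dev/null bs=1024k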



Re: [zfs-discuss] Configuration questions for Home File Server (CPU cores, dedup, checksum)?

2010-09-07 Thread Russ Price

On 09/07/2010 03:58 PM, Craig Stevenson wrote:

I am working on a home file server.  After reading a wide range of blogs and 
forums, I have a few questions that are still not clear to me

1.  Is there a benefit in having quad core CPU (e.g. Athlon II X4 vs X2)? All 
of the web blogs seem to suggest using lower-wattage dual core CPUs.  But; with 
the recent advent of dedup, SHA256 checksum, etc., I am now wondering if 
opensolaris is better served with quad core.


With a big RAIDZ3, it's well worth having extra cores. A scrub on my eight-disk 
RAIDZ2 uses about 60% of all four cores on my Athlon II X4 630.


With smaller pools, dual-core would be OK.

If you're going to use dedup, you might want to go with an eight-disk RAIDZ2 and 
an SSD for L2ARC, instead of a nine-disk RAIDZ3.
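A minimal sketch of the SSD side of that, assuming the pool is named tank and 
the SSD shows up as c2t0d0 (both names are made up):

  # Attach the SSD as an L2ARC cache device, then enable dedup on the pool:
  zpool add tank cache c2t0d0
  zfs set dedup=on tank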



Re: [zfs-discuss] New Supermicro SAS/SATA controller: AOC-USAS2-L8e in SOHO NAS and HD HTPC

2010-08-16 Thread Russ Price

On 08/16/2010 10:35 AM, Freddie Cash wrote:

On Mon, Aug 16, 2010 at 7:13 AM, Mike DeMarco  wrote:

What I would really like to know is why do pci-e raid controller cards cost 
more than an entire motherboard with processor. Some cards can cost over $1,000 
dollars, for what.


Because they include a motherboard and processor.  :)  The high-end
RAID controllers include their own CPUs and RAM for doing all the RAID
stuff in hardware.

The low-end RAID controllers (if you can even really call them RAID
controllers) do all the RAID stuff in software via a driver installed
in the OS, running on the host computer's CPU.

And the ones in the middle have "simple" XOR engines for doing the
RAID stuff in hardware.




And the irony is that the expensive hardware RAID controllers really aren't a 
good idea for ZFS. For a ZFS application, you're far better off using a simple 
HBA in JBOD mode, and such HBAs can be had in the $100-$200 range.
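In ZFS terms that just means handing the bare disks straight to the pool - 
roughly like this, where the controller/target numbers are only examples:

  # Simple HBA/JBOD case: no hardware RAID layer, ZFS manages the raw disks.
  zpool create tank raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0 c4t7d0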



Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-14 Thread Russ Price

On 08/13/2010 10:21 PM, Edward Ned Harvey wrote:


Very few people would bother paying for solaris/zfs if they couldn't try it
for free and get a good taste of what it's valuable for.



My guess is that the theoretical Solaris Express 11 will be crippled by any or 
all of: missing features, artificial limits on functionality, or a restrictive 
license. I consider the latter most likely, much like the OTN downloads of 
Oracle DB, where you can download and run it for development purposes, but don't 
even THINK of using it as a production server for your home or small business. 
Of course, an Oracle DB is overkill for such a purpose anyway, but that's a 
different kettle of fish.


For me, Solaris had zero mindshare from the beginning, on account of being 
prohibitively expensive. When OpenSolaris came out, I basically ignored it once 
I found out that it was not completely open source, since I figured that there 
was too great a risk of a train wreck like we have now. Then, I decided this 
winter to give ZFS a spin, decided I liked it, and built a home server around it 
- and within weeks Oracle took over, tore up the tracks without telling anybody, 
and made the train wreck I feared into a reality. I should have listened to my 
own advice.


As much as I'd like to be proven wrong, I don't expect SX11 to be useful for my 
purposes, so my home file server options are:


1. Nexenta Core. It's maintained, and (somewhat) more up-to-date than the late 
OpenSolaris. As I've been running Linux since the days when a 486 was a 
cutting-edge system, I don't mind having a GNU userland. Of course, now that 
Oracle has slammed the door, it'll be difficult for it to move forward - which 
leads to:


2. IllumOS. In 20/20 hindsight, a project like this should have begun as soon as 
OpenSolaris first came out the door, but better late than never. In the short 
term, it's not yet an option, but in the long term, it may be the best (or only) 
hope. At the very least, I won't be able to use it until an open mpt driver is 
in place.


3. Just stick with b134. Actually, I've managed to compile my way up to b142, 
but I'm having trouble getting beyond it - my attempts to install later versions 
just result in new boot environments with the old kernel, even with the latest 
pkg-gate code in place. Still, even if I get the latest code to install, it's 
not viable for the long term unless I'm willing to live with stasis.


4. FreeBSD. I could live with it if I had to, but I'm not fond of its packaging 
system; the last time I tried it I couldn't get the package tools to pull a 
quick binary update. Even IPS works better. I could go to the ports tree 
instead, but if I wanted to spend my time recompiling everything, I'd run Gentoo 
instead.


5. Linux/FUSE. It works, but it's slow.
5a. Compile-it-yourself ZFS kernel module for Linux. This would be a hassle 
(though DKMS would make it less of an issue), but usable - except that the 
current module only supports zvols, so it's not ready yet, unless I wanted to 
run ext3-on-zvol (a rough sketch of that follows after this list). Neither of 
these solutions is practical for booting from ZFS.


6. Abandon ZFS completely and go back to LVM/MD-RAID. I ran it for years before 
switching to ZFS, and it works - but it's a bitter pill to swallow after 
drinking the ZFS Kool-Aid.
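
For the record, the ext3-on-zvol arrangement mentioned in 5a would look 
roughly like this under the Linux module (names and size are made up, and the 
zvol device path may differ depending on the module's udev rules):

  # Create a zvol, put ext3 on it, and mount it:
  zfs create -V 100G tank/extvol
  mkfs.ext3 /dev/zvol/tank/extvol
  mount /dev/zvol/tank/extvol /mnt/extvol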



Re: [zfs-discuss] Confused about consumer drives and zfs can someone help?

2010-07-24 Thread Russ Price

On 07/23/2010 02:39 AM, tomwaters wrote:
> Re the CPU, do not go low power Atom etc, go a newish
> Core2 duo...the power differential at idle is bugger all
> and when you want to use the nas, ZFS will make good use
> of the CPU.

Good advice - ZFS can use quite a lot of CPU cycles. A low-end AMD quad-core is 
another good choice here. Even better, the AMD chips support ECC RAM, though you 
must get a motherboard with a BIOS that supports it. I have an Athlon II X4 630 
in my setup, and it has plenty of horsepower for an eight-disk RAIDZ2.


> re. cards...I use and recommend these 8-Port SUPERMICRO
> AOC-USASLP-L8I UIO SAS. They are cheap on e-bay, just work
> and are fast. Use them.

Keep in mind you'll have to modify the bracket to make the card fit (unless 
you're using a Supermicro UIO case). Another good alternative with the same 
chipset is the Intel SASUC8I. This is sold as a bare card, and you need to order 
the fan-out cables separately.



[zfs-discuss] mpt hotswap procedure

2010-05-19 Thread Russ Price
I'm not having any luck hotswapping a drive attached to my Intel SASUC8I 
(LSI-based) controller. The commands which work for the AMD AHCI ports don't 
work for the LSI. Here's what "cfgadm -a" reports with all drives installed and 
operational:


Ap_Id                Type         Receptacle   Occupant     Condition
c4                   scsi-sas     connected    configured   unknown
c4::dsk/c4t0d0       disk         connected    configured   unknown
c4::dsk/c4t1d0       disk         connected    configured   unknown
c4::dsk/c4t2d0       disk         connected    configured   unknown
c4::dsk/c4t3d0       disk         connected    configured   unknown
c4::dsk/c4t4d0       disk         connected    configured   unknown
c4::dsk/c4t5d0       disk         connected    configured   unknown
c4::dsk/c4t6d0       disk         connected    configured   unknown
c4::dsk/c4t7d0       disk         connected    configured   unknown
sata0/0::dsk/c5t0d0  disk         connected    configured   ok
sata0/1::dsk/c5t1d0  disk         connected    configured   ok
sata0/2::dsk/c5t2d0  disk         connected    configured   ok
sata0/3::dsk/c5t3d0  disk         connected    configured   ok
sata0/4::dsk/c5t4d0  disk         connected    configured   ok
sata0/5::dsk/c5t5d0  disk         connected    configured   ok

[irrelevant USB entries snipped]

Now, if I yank out a drive on one of the AHCI ports (let's use port 3 as an 
example), I can use:


cfgadm -c connect sata0/3
cfgadm -c configure sata0/3

and bring the new drive online. I have had no luck with the SASUC8I; even though 
I can see messages in the system log that a drive was inserted, the only way 
I've been able to actually use the drive afterwards has been via a reboot.  A 
command like:


cfgadm -c connect c4::dsk/c4t4d0

will be greeted with the message:

cfgadm: Hardware specific failure: operation not supported for SCSI device

Is "cfgadm -c connect c4" sufficient, or is there some other incantation I'm 
missing? :)
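
For reference, the full sequence I would have expected to work on the LSI side 
is something like the following, using c4t4d0 and a pool named tank purely as 
examples:

  # Unconfigure the failing disk, swap it, rescan, then resilver:
  cfgadm -c unconfigure c4::dsk/c4t4d0
  # ...physically replace the drive...
  devfsadm -Cv
  cfgadm -al
  zpool replace tank c4t4d0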




Re: [zfs-discuss] b134 - Mirrored rpool won't boot unless both mirrors are present

2010-03-28 Thread Russ Price
> This problem is known and fixed in later builds:
> 
> 
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6923585
> 
> AFAIK it is going to be included into b134a as well

OK, I just did some checking, and my rpool was already set up with 
autoreplace=off. It's necessary to use the -r boot flag as well; that works. 
Hopefully, b134a will actually have the fix.

Once I use -r, the boot proceeds at normal speed - no need to wait ridiculous 
amounts of time as Tim suggested.
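
For anyone else hitting this, checking the property and adding the flag is 
quick (the GRUB kernel line below is from memory, so adjust it to match your 
own menu.lst):

  # Confirm the pool property:
  zpool get autoreplace rpool
  # Then do a reconfiguration boot by appending -r to the GRUB kernel line:
  kernel$ /platform/i86pc/kernel/$ISADIR/unix -r -B $ZFS-BOOTFS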


Re: [zfs-discuss] b134 - Mirrored rpool won't boot unless both mirrors are present

2010-03-27 Thread Russ Price
> What build?  How long have you waited for the boot?  It
> almost sounds to me like it's waiting for the
> drive and hasn't timed out before you give up and
> power it off.

I waited about three minutes. This is a b134 installation.

On one of my tests, I tried shoving the removed mirror into the hotswap bay, 
and got a console message indicating that the device was detected, but that 
didn't make the boot complete. I restarted the system with the drive present, 
and everything's fine.

How long should I expect to wait if a drive is missing? It shouldn't take more 
than 30 seconds, IMHO.


[zfs-discuss] b134 - Mirrored rpool won't boot unless both mirrors are present

2010-03-27 Thread Russ Price
I have two 500 GB drives on my system that are attached to built-in SATA ports 
on my Asus M4A785-M motherboard, running in AHCI mode. If I shut down the 
system, remove either drive, and then try to boot the system, it will fail to 
boot. If I disable the splash screen, I find that it will display the SunOS 
banner and the hostname, but it never gets as far as the "Reading ZFS config:" 
stage. GRUB is installed on both drives, and if both drives are present, I can 
flip the boot order in the BIOS and still have it boot successfully. I can even 
move one of the mirrors to a different SATA port and still have it boot. But if 
a mirror is missing, forget it. I can't find any log entries in 
/var/adm/messages about why it fails to boot, and the console is equally 
uninformative. If I check fmdump, it reports an empty fault log.

If I throw in a blank drive in place of one of the mirrors, the boot still 
fails. Needless to say, this pretty much makes the whole idea of mirroring 
rather useless.

Any idea what's really going wrong here?


Re: [zfs-discuss] Moving drives around...

2010-03-24 Thread Russ Price
> On Tue, March 23, 2010 12:00, Ray Van Dolson wrote:
> ZFS recognizes disks based on various ZFS special blocks written to them.
> It also keeps a cache file on where things have been lately. If you export
> a ZFS pool, swap the physical drives around, and import it, everything
> should be fine. If you don't export first, you may have to give it a bit
> of help. And there are pathological cases where, for example, you don't
> have a link in the /dev/dsk directory, which can cause a default import
> to not find all the pieces of a pool.

Indeed. Before I wised up and bought an HBA for my RAIDZ2 array instead of 
using randomly-assorted SATA controllers, I tried rearranging some disks 
without exporting the pool first. I almost had a heart attack when the system 
came up reporting "corrupted data" on the drives that had been switched. As it 
turned out, I just needed to export and re-import the pool, and it was fine 
after that. Needless to say, when the HBA went in, I made sure to export the 
pool FIRST.
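
The moral, in command form (the pool name is whatever yours happens to be):

  # Before rearranging cables, controllers, or drive bays:
  zpool export tank
  # ...move the drives around, then:
  zpool import tank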


Re: [zfs-discuss] Intel SASUC8I - worth every penny

2010-03-12 Thread Russ Price
> Did you enable AHCI mode on _every_ SATA controller?
> 
> I have the exact opposite experience with 2 of your 3
> types of controllers.

It wasn't possible to do so, and that also made me think that a real HBA would 
work better. First off, with the AMD SB700/SB800 on-board ports, if I set the 
last two ports to AHCI mode, the BIOS doesn't even see drives there, and 
neither does OpenSolaris; the first four ports work fine in AHCI. The JMicron 
board came up in AHCI mode; it never, ever presents a BIOS of its own to change 
configuration. The Silicon Image board (one from SIIG) doesn't have an AHCI 
mode in its BIOS.


Re: [zfs-discuss] Intel SASUC8I - worth every penny

2010-03-11 Thread Russ Price
> Can you tell us the build
> version of the opensolaris?

I'm currently on b134 (but I had the performance issues with 2009.06, b130, 
b131, b132, and b133 as well).

I may end up swapping the Phenom II X2 550 with an Athlon II X4 630 that I've 
put into another M4A785-M system. I noticed that the eight-disk scrub came 
close to maxing out both cores of the Phenom - so it wouldn't hurt to have a 
couple more cores in place of the extra cache and clock speed. However, even 
with the CPU nearly pegged, it was still serving files smoothly - vastly better 
than the rag-tag controller assortment.

If you're going to run a big array on OpenSolaris, it's a good idea to use a 
real HBA instead of consumer-grade interfaces. :o) The nice thing about the 
SASUC8I / SAS3081E-R is that, by default, it presents the array to the 
operating system as individual drives - perfect for ZFS.

 scrub: scrub completed after 1h33m with 0 errors on Thu Mar 11 19:17:01 2010
config:

        NAME           STATE     READ WRITE CKSUM
        tank           ONLINE       0     0     0
          raidz2-0     ONLINE       0     0     0
            c11t1d0p1  ONLINE       0     0     0
            c11t7d0p1  ONLINE       0     0     0
            c11t6d0p1  ONLINE       0     0     0
            c11t4d0p1  ONLINE       0     0     0
            c11t0d0p1  ONLINE       0     0     0
            c11t5d0p1  ONLINE       0     0     0
            c11t2d0p1  ONLINE       0     0     0
            c11t3d0p1  ONLINE       0     0     0

errors: No known data errors


[zfs-discuss] Intel SASUC8I - worth every penny

2010-03-11 Thread Russ Price
I had recently started setting up a homegrown OpenSolaris NAS with a large 
RAIDZ2 pool, and had found its RAIDZ2 performance severely lacking - more like 
downright atrocious. As originally set up:

* Asus M4A785-M motherboard
* Phenom II X2 550 Black CPU
* JMB363-based PCIe X1 SATA card (2 ports)
* SII3132-based PCIe X1 SATA card (2 ports)
* Six on-board SATA ports

Two 500 GB drives (one Seagate, one WD) serve as the root pool, and have 
performed admirably. The other eight 500 GB drives (4 Seagate, 4 WD, in a 
RAIDZ2 configuration) performed quite poorly, with lots of long freezeups and 
no error messages. Even streaming a 48 kHz/24-bit FLAC via CIFS would 
occasionally freeze for 5-10 seconds, with no other load on the file server. 
Such freezeups became far more likely with other activity - forget about 
streaming video if a scrub was going on, for instance. These pauses were NOT 
accompanied by any CPU activity. If I watched what the array was doing using 
GKrellM, I could see the pauses.

I started to get the feeling that I was running into a bad I/O bottleneck. I 
don't know how many PCIe lanes are being used by the onboard ports, and I'm now 
of the opinion that two-port PCIe X1 SATA cards are a Very Bad Idea for 
OpenSolaris. Today, I replaced the motley assortment of controllers with an 
Intel SASUC8I to handle the RAIDZ2 array, leaving the root pool on two of the 
onboard ports. Having already had a heart-attack moment last week after 
rearranging drives, *this* time I knew to do a "zpool export" before powering 
the system down. :O

The card worked out-of-the-box, with no extra configuration required. WOW, what 
a difference! I tried a minor stress-test: viewing some 720p HD video on one 
system via NFS, while streaming music via CIFS to my XP desktop. Not a single 
pause or stutter - smooth as silk. Just for kicks, I upped the ante and started 
a scrub on the RAIDZ2. No problem! Finally, it works like it should!

The scrub is going about twice as fast overall, with none of the herky-jerky 
action I was getting using the mix-and-match SATA interfaces.

An interesting note about the SASUC8I: the name "Intel" doesn't appear anywhere on 
the card. It's basically a repackaged LSI SAS3081E-R card (it's even labeled as 
such on the card itself and on the antistatic bag), and came just as a card in 
a box with an additional low-profile bracket for those with 1U cases - no 
driver CD or cables. I knew that it didn't come with cables, and ordered them 
separately. If I had ordered the LSI kit with cables from the same supplier, it 
would have cost about $80 more than getting the SASUC8I and cables separately.

If you're building a NAS, and have a PCIe X8 or X16 slot handy, this card is 
well worth it. Leave the two-port cheapies for workstations.