Re: [zfs-discuss] latest zpool version in solaris 11 express

2011-07-20 Thread Rob Logan
plus VirtualBox 4.1 with network-in-a-box would like snv_159. From http://www.virtualbox.org/wiki/Changelog: "Solaris hosts: New Crossbow based bridged networking driver for Solaris 11 build 159 and above." Rob

Re: [zfs-discuss] WarpDrive SLP-300

2010-11-17 Thread Rob Logan
> BTW, any new storage-controller-related drivers introduced in snv151a?

The 64-bit drivers in 147:
-rwxr-xr-x 1 root sys 401200 Sep 14 08:44 mpt
-rwxr-xr-x 1 root sys 398144 Sep 14 09:23 mpt_sas
are a different size than in 151a:
-rwxr-xr-x 1 root sys 400936 Nov 15

Re: [zfs-discuss] sharing a ssd between rpool and l2arc

2010-03-30 Thread Rob Logan
> you can't use anything but a block device for the L2ARC device.

Sure you can... http://mail.opensolaris.org/pipermail/zfs-discuss/2010-March/039228.html It even lives through a reboot (rpool is mounted before other pools):
zpool create -f test c9t3d0s0 c9t4d0s0
zfs create -V 3G rpool/cache
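The step the snippet cuts off, shown in full in the "SSD As ARC" post below, attaches the zvol as the cache device:
zpool add test cache /dev/zvol/dsk/rpool/cache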

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-30 Thread Rob Logan
> if you disable the ZIL altogether, and you have a power interruption, failed cpu, or kernel halt, then you're likely to have a corrupt unusable zpool

The pool will always be fine, no matter what.

> or at least data corruption.

Yea, it's a good bet that data sent to your file or zvol will
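The knob under debate, as it existed on these builds (a hedged sketch; later builds replaced the global tunable with a per-dataset sync property):
echo "set zfs:zil_disable = 1" >> /etc/system
reboot
With it set, synchronous writes are acknowledged before they reach stable storage, so an outage loses recent writes even though the pool itself stays consistent.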

Re: [zfs-discuss] SSD As ARC

2010-03-28 Thread Rob Logan
> Can't you slice the SSD in two, and then give each slice to the two zpools?

This is exactly what I do ... use 15-20 GB for root and the rest for an L2ARC. I like the idea of swapping on SSD too, but why not make a zvol for the L2ARC so you're not limited by the hard partitioning?

Re: [zfs-discuss] SSD As ARC

2010-03-28 Thread Rob Logan
> I like the idea of swapping on SSD too, but why not make a zvol for the L2ARC so you're not limited by the hard partitioning?

It lives through a reboot..
zpool create -f test c9t3d0s0 c9t4d0s0
zfs create -V 3G rpool/cache
zpool add test cache /dev/zvol/dsk/rpool/cache
reboot

Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Rob Logan
> An UPS plus disabling zil, or disabling synchronization, could possibly achieve the same result (or maybe better) iops wise.

Even with the fastest slog, disabling the ZIL will always be faster... (fewer bytes to move) This would probably work, given that your computer never crashes in an

Re: [zfs-discuss] Reading ZFS config for an extended period

2010-02-15 Thread Rob Logan
> RFE open to allow you to store [DDT] on a separate top level VDEV

Hmm, add to this spare, log and cache vdevs; it's to the point of making another pool and thinly provisioning volumes to maintain partitioning flexibility. taemun: hey, thanks for closing the loop!
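A minimal sketch of the thin-provisioning idea (pool and volume names are hypothetical; -s makes a sparse volume whose space is not reserved up front):
zpool create fast c9t3d0s0 c9t4d0s0
zfs create -s -V 1T fast/vol1
zfs create -s -V 1T fast/vol2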

Re: [zfs-discuss] Cores vs. Speed?

2010-02-06 Thread Rob Logan
> I like the original Phenom X3 or X4

We all agree RAM is the key to happiness. The debate is what offers the most ECC RAM for the least $. I failed to realize the AM3 cpus accept UnBuffered ECC DDR3-1333 like Lynnfield. To use Intel's 6 slots vs AMD's 4 slots, one must use Registered ECC. So

Re: [zfs-discuss] Cores vs. Speed?

2010-02-05 Thread Rob Logan
> if zfs overlaps mirror reads across devices.

It does... I have one very old disk in this mirror, and when I attach another element one can see more reads going to the faster disks... this paste isn't right after the attach but since the reboot, but one can still see the reads are load balanced
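To watch the balancing yourself (a hedged example; the pool name is hypothetical):
zpool iostat -v tank 1
The read column of the faster mirror elements climbs while the slow disk's stays low.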

Re: [zfs-discuss] Cores vs. Speed?

2010-02-05 Thread Rob Logan
> Intel's RAM is faster because it needs to be.

I'm confused how AMD's dual channel, two-way interleaved 128-bit DDR2-667 into an on-cpu controller is faster than Intel's Lynnfield dual channel, Rank and Channel interleaved DDR3-1333 into an on-cpu controller.

Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Rob Logan
> I am leaning towards AMD because of ECC support

Well, let's look at Intel's offerings... RAM is faster than AMD's at 1333MHz DDR3, and one gets ECC and a thermal sensor for $10 over non-ECC http://www.newegg.com/Product/Product.aspx?Item=N82E16820139040 This MB has two Intel ethernets and for

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-02 Thread Rob Logan
> true. but I buy a Ferrari for the engine and bodywork and chassis engineering. It is totally criminal what Sun/EMC/Dell/Netapp do charging

It's interesting to read this with another thread containing: timeout issue is definitely the WD10EARS disks. replaced 24 of them with ST32000542AS (f/w

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Rob Logan
> a 1U or 2U JBOD chassis for 2.5" drives

From http://supermicro.com/products/nfo/chassis_storage.cfm the E1 (single) or E2 (dual) options have a SAS expander, so http://supermicro.com/products/chassis/2U/?chs=216 fits your build, or build it yourself with

Re: [zfs-discuss] 4 Internal Disk Configuration

2010-01-14 Thread Rob Logan
> By partitioning the first two drives, you can arrange to have a small zfs-boot mirrored pool on the first two drives, and then create a second pool as two mirror pairs, or four drives in a raidz to support your data.

Agreed..
2 % zpool iostat -v
capacity operations

[zfs-discuss] unable to zfs destroy

2010-01-08 Thread Rob Logan
This one has me a little confused. Ideas?
j...@opensolaris:~# zpool import z
cannot mount 'z/nukeme': mountpoint or dataset is busy
cannot share 'z/cle2003-1': smb add share failed
j...@opensolaris:~# zfs destroy z/nukeme
internal error: Bad exchange descriptor
Abort (core dumped)

Re: [zfs-discuss] Separate Zil on HDD ?

2009-12-02 Thread Rob Logan
> 2 x 500GB mirrored root pool
> 6 x 1TB raidz2 data pool
> I happen to have 2 x 250GB Western Digital RE3 7200rpm
> be better than having the ZIL 'inside' the zpool.

Listing two log devices (stripe) would have more spindles than your single raidz2 vdev.. but for low cost fun one might make a

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-30 Thread Rob Logan
> Chenbro 16 hotswap bay case. It has 4 mini backplanes that each connect via an SFF-8087 cable
> StarTech HSB430SATBK

Hmm, both are passive backplanes with one SATA tunnel per link... no SAS expanders (LSISASx36) like those found in SuperMicro or J4x00 with 4 links per connection. Wonder

Re: [zfs-discuss] scrub differs in execute time?

2009-11-14 Thread Rob Logan
> P45 Gigabyte EP45-DS3P. I put the AOC card into a PCI slot

I'm not sure which half of your disks are where, or how your vdevs are configured, but the ICH10 has 6 sata ports at 300MB and one PCI port at 266MB (that's also shared with the IT8213 IDE chip), so in an ideal world your scrub bandwidth

Re: [zfs-discuss] scrub differs in execute time?

2009-11-14 Thread Rob Logan
> The ICH10 has a 32-bit/33MHz PCI bus which provides 133MB/s at half duplex.

You are correct; I thought the ICH10 used a 66MHz bus, when in fact it's 33MHz. The AOC card works fine in a PCI-X 64-bit/133MHz slot, good for 1,067 MB/s, even if the motherboard uses a PXH chip via 8 lane PCIE.

Re: [zfs-discuss] raidz-1 vs mirror

2009-11-11 Thread Rob Logan
> from a two disk (10krpm) mirror layout to a three disk raidz-1.

Writes will be unnoticeably slower for raidz1 because of the parity calculation and the latency of a third spindle, but reads will be 1/2 the speed of the mirror, because the mirror can split reads between two disks. Another way to say the

Re: [zfs-discuss] PSARC recover files?

2009-11-09 Thread Rob Logan
Frequent snapshots offer outstanding oops protection. Rob

Re: [zfs-discuss] PSARC recover files?

2009-11-09 Thread Rob Logan
> Maybe to create snapshots after the fact

How does one quiesce a drive after the fact?

Re: [zfs-discuss] sub-optimal ZFS performance

2009-10-29 Thread Rob Logan
> So the solution is to never get more than 90% full disk space

While that's true, it's not Henrik's main discovery. Henrik points out that 1/4 of the ARC is used for metadata, and sometimes that's not enough.. if echo ::arc | mdb -k | egrep ^size isn't reaching echo ::arc | mdb -k | egrep ^c
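A hedged way to eyeball both numbers at once (the exact field names vary a little across builds):
echo ::arc | mdb -k | egrep '^(size|c) '
If size stays pinned below c while the workload is metadata-heavy, the metadata limit is the likely ceiling.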

Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Rob Logan
> are you going to ask NetApp to support ONTAP on Dell systems,

Well, ONTAP 5.0 is built on FreeBSD, so it wouldn't be too hard to boot on Dell hardware. Hey, at least it can do aggregates larger than 16T now... http://www.netapp.com/us/library/technical-reports/tr-3786.html

Re: [zfs-discuss] ZPOOL Metadata / Data Error - Help

2009-10-04 Thread Rob Logan
> Action: Restore the file in question if possible. Otherwise restore the entire pool from backup.
> metadata:0x0
> metadata:0x15

Bet it's in a snapshot that looks to have been destroyed already. Try:
zpool clear POOL01
zpool scrub POOL01

Re: [zfs-discuss] bigger zfs arc

2009-10-02 Thread Rob Logan
> zfs will use as much memory as is necessary

But how is necessary calculated? Using arc_summary.pl from http://www.cuddletech.com/blog/pivot/entry.php?id=979 my tiny system shows:
Current Size: 4206 MB (arcsize)
Target Size (Adaptive): 4207 MB (c)
Min

Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread Rob Logan
> The post I read said OpenSolaris guest crashed, and the guy clicked the ``power off guest'' button on the virtual machine.

I seem to recall the guest hung. 99% of Solaris hangs (without a crash dump) are hardware in nature (my experience, backed by an uptime of 1116 days), so the finger is still

Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-20 Thread Rob Logan
> the machine hung and I had to power it off.

Kinda getting off the zpool import --txg -3 request, but hangs are exceptionally rare and usually RAM or another hardware issue; Solaris usually abends on software faults.
r...@pdm # uptime
9:33am up 1116 day(s), 21:12, 1 user, load average:

Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Rob Logan
c4               scsi-bus  connected  configured  unknown
c4::dsk/c4t15d0  disk      connected  configured  unknown
:
c4::dsk/c4t33d0  disk      connected  configured  unknown
c4::es/ses0      ESI       connected

Re: [zfs-discuss] ZFS write I/O stalls

2009-06-30 Thread Rob Logan
> CPU is smoothed out quite a lot

Yes, but the area under the CPU graph is less, so the rate of real work performed is less, so the entire job took longer (albeit smoother). Rob

Re: [zfs-discuss] ZFS and Dinamic Stripe

2009-06-29 Thread Rob Logan
> try to be spread across different vdevs.

% zpool iostat -v
              capacity     operations    bandwidth
pool        used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
z            686G   434G     40      5  2.46M   271K
  c1t0d0s7   250G   194G

Re: [zfs-discuss] problems with l2arc in 2009.06

2009-06-18 Thread Rob Logan
> correct ratio of arc to l2arc?

From http://blogs.sun.com/brendan/entry/l2arc_screenshots: It costs some DRAM to reference the L2ARC, at a rate proportional to record size. For example, it currently takes about 15 Gbytes of DRAM to reference 600 Gbytes of L2ARC - at an 8 Kbyte ZFS record size.
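Worked out: 600 GB / 8 KB is about 75 million L2ARC records, and 15 GB / 75M is roughly 200 bytes of DRAM per record, so a larger recordsize shrinks the DRAM tax proportionally.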

Re: [zfs-discuss] Replacing HDD with larger HDD..

2009-05-22 Thread Rob Logan
> zpool offline grow /var/tmp/disk01
> zpool replace grow /var/tmp/disk01 /var/tmp/bigger_disk01

One doesn't need to offline before the replace, so as long as you have one free disk interface one can cfgadm -c configure sata0/6 each disk as you go... or you can offline and cfgadm each disk in the
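A hedged sketch of the whole-pool grow (device names are hypothetical; let each resilver finish before moving on):
for d in c1t1d0 c1t2d0 c1t3d0; do
  zpool replace grow $d    # new, larger disk swapped into the same slot
  while zpool status grow | grep -q 'resilver in progress'; do sleep 60; done
done
Once every device in the vdev is the larger size, the extra capacity appears.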

Re: [zfs-discuss] RAIDZ2: only half the read speed?

2009-05-22 Thread Rob Logan
> How does one look at the disk traffic?

iostat -xce 1

> OpenSolaris, raidz2 across 8 7200 RPM SATA disks: 17179869184 bytes (17 GB) copied, 127.308 s, 135 MB/s
> OpenSolaris, flat pool across the same 8 disks: 17179869184 bytes (17 GB) copied, 61.328 s, 280 MB/s

One raidz2 set of 8 disks

Re: [zfs-discuss] SAS 15K drives as L2ARC

2009-05-05 Thread Rob Logan
> use a bunch of 15K SAS drives as L2ARC cache for several TBs of SATA disks?

Perhaps... depends on the workload, and whether the working set can live on the L2ARC.

> used mainly as astronomical images repository

Hmm, perhaps two trays of 1T SATA drives, all mirrors rather than raidz sets of one

[zfs-discuss] zpool import crash, import degraded mirror?

2009-04-29 Thread Rob Logan
When I type `zpool import` to see what pools are out there, it gets to
/1: open(/dev/dsk/c5t2d0s0, O_RDONLY) = 6
/1: stat64(/usr/local/apache2/lib/libdevid.so.1, 0x08042758) Err#2 ENOENT
/1: stat64(/usr/lib/libdevid.so.1, 0x08042758) = 0
/1: d=0x02D90002

Re: [zfs-discuss] Motherboard for home zfs/solaris file server

2009-02-24 Thread Rob Logan
> Not. Intel decided we don't need ECC memory on the Core i7

I thought that was Core i7 vs Xeon E55xx for socket LGA-1366, so that's why this X58 MB claims ECC support: http://supermicro.com/products/motherboard/Xeon3000/X58/X8SAX.cfm

Re: [zfs-discuss] SMART data

2008-12-08 Thread Rob Logan
The sata framework uses the sd driver, so it's:
4 % smartctl -d scsi -a /dev/rdsk/c4t2d0s0
smartctl version 5.36 [i386-pc-solaris2.8] Copyright (C) 2002-6 Bruce Allen
Home page is http://smartmontools.sourceforge.net/
Device: ATA WDC WD1001FALS-0 Version: 0K05
Serial number:
Device type:

Re: [zfs-discuss] Inexpensive ZFS home server

2008-11-12 Thread Rob Logan
> I don't think the Pentium E2180 has the lanes to use ECC RAM.

Look at the north bridge, not the cpu.. the PowerEdge SC440 uses the Intel 3000 MCH, which supports up to 8GB unbuffered ECC or non-ECC DDR2 667/533 SDRAM. It has been replaced with the Intel 32x0, which uses DDR2 800/667MHz unbuffered ECC /

Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2008-10-29 Thread Rob Logan
> ECC?

$60 unbuffered 4GB 800MHz DDR2 ECC CL5 DIMM (Kit of 2) http://www.provantage.com/kingston-technology-kvr800d2e5k2-4g~7KIN90H4.htm for an Intel 32x0 north bridge like http://www.provantage.com/supermicro-x7sbe~7SUPM11K.htm

Re: [zfs-discuss] zfs equivalent of ufsdump and ufsrestore

2008-05-30 Thread Rob Logan
> I'd like to take a backup of a live filesystem without modifying the last accessed time.

Why not take a snapshot? Rob

Re: [zfs-discuss] zfs equivalent of ufsdump and ufsrestore

2008-05-30 Thread Rob Logan
> Is there a way to efficiently replicate a complete zfs-pool including all filesystems and snapshots?

zfs send -R

-R   Generate a replication stream package, which will replicate the specified filesystem, and
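A hedged end-to-end example (host and pool names are hypothetical):
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | ssh otherhost zfs receive -dF newtank
-R carries every descendant filesystem, snapshot, and property in one stream.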

Re: [zfs-discuss] What is a vdev?

2008-05-30 Thread Rob Logan
> making all the drives in a *zpool* the same size.

The only issue with having vdevs of different sizes is when one fills up, reducing the stripe size for writes.

> making all the drives in a *vdev* (of almost any type) the same

The only issue is the unused space of the largest device, but then we

Re: [zfs-discuss] slog failure ... *ANY* way to recover?

2008-05-30 Thread Rob Logan
> 1) an l2arc or log device needs to be evacuation-possible

How about evacuation of any vdev? (pool shrink!)

> 2) any failure of an l2arc or log device should never prevent importation of a pool.

How about import or creation of any kinda degraded pool? Rob

Re: [zfs-discuss] is mirroring a vdev possible?

2008-05-30 Thread Rob Logan
> replace a current raidz2 vdev with a mirror.

You're asking for vdev removal or pool shrink, which isn't finished yet. Rob

Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-27 Thread Rob Logan
> There is something more to consider with SSDs used as a cache device.

Why use SATA as the interface? Perhaps http://www.tgdaily.com/content/view/34065/135/ would be better? (no experience) Cards will start at 80 GB and will scale to 320 and 640 GB next year. By the end of 2008, Fusion io also

Re: [zfs-discuss] zfs raidz2 configuration mistake

2008-05-21 Thread Rob Logan
> 1) Am I right in my reasoning?

Yes.

> 2) Can I remove the new disks from the pool, and re-add them under the raidz2 pool

Copy the data off the pool, destroy and remake the pool, and copy back.

> 3) How can I check how much zfs data is written on the actual disk (say c12)?
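A hedged sketch of that copy-off/copy-back dance (pool names and the new raidz2 layout are hypothetical):
zfs snapshot -r tank@evacuate
zfs send -R tank@evacuate | zfs receive -dF scratch
zpool destroy tank
zpool create tank raidz2 c10d0 c11d0 c12d0 c13d0 c14d0
zfs send -R scratch@evacuate | zfs receive -dF tank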

Re: [zfs-discuss] opensolaris 2008.05 boot recovery

2008-05-20 Thread Rob Logan
> would do and booted from the CD. OK, now I zpool imported rpool, modified [], exported the pool, and rebooted.

The oops part is exporting the pool; a reboot right after editing would have worked as expected, since rpool wouldn't have been marked as exported. So boot from the cdrom again, zpool import

Re: [zfs-discuss] question about zpool import

2008-05-20 Thread Rob Logan
> type: zpool import 11464983018236960549 rpool.old

zpool import -f mypool
zpool upgrade -a
zfs upgrade -a

Re: [zfs-discuss] replace, restart, gone - HELP!

2008-05-20 Thread Rob Logan
> There's also a spare attached to the pool that's not showing here.

Can you make it show? Rob

Re: [zfs-discuss] replace, restart, gone - HELP!

2008-05-20 Thread Rob Logan
> How do I go about making it show?

zdb -e exported_pool_name
will show the children's paths; find the path of the spare that's missing, and once you get it to show up you can import the pool. Rob

Re: [zfs-discuss] cp -r hanged copying a directory

2008-05-02 Thread Rob Logan
Or work around the NCQ bug in the drive's FW by typing:
su
echo "set sata:sata_max_queue_depth = 0x1" >> /etc/system
reboot
Rob

Re: [zfs-discuss] cp -r hanged copying a directory

2008-05-01 Thread Rob Logan
Hmm, three drives with 35 io requests in the queue and none active? Remind me not to buy a drive with that FW..
1) upgrade the FW in the drives, or
2) turn off NCQ with:
echo "set sata:sata_max_queue_depth = 0x1" >> /etc/system
Rob

Re: [zfs-discuss] cp -r hanged copying a directory

2008-04-28 Thread Rob Logan
> I did the cp -r dir1 dir2 again and when it hanged

When it's hung, can you type iostat -xce 1 in another window, and is there a 100 in the %b column? When you reset and try the cp again and look at iostat -xce 1 on the second hang, is the same disk at 100 in %b? If all your windows are hung,

Re: [zfs-discuss] zfs send/recv question

2008-03-06 Thread Rob Logan
> Because then I have to compute yesterday's date to do the incremental dump.

snaps=15
today=`date +%j`
# to change the second day of the year from 002 to 2
today=`expr $today + 0`
nuke=`expr $today - $snaps`
yesterday=`expr $today - 1`
if [ $yesterday -lt 1 ] ; then
  yesterday=365
fi
if [
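The incremental step those date variables feed, as a hedged sketch (the dataset and target host are hypothetical):
zfs snapshot z/home@$today
zfs send -i z/home@$yesterday z/home@$today | ssh backuphost zfs receive backup/home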

Re: [zfs-discuss] What is likely the best way to accomplish this task?

2008-03-04 Thread Rob Logan
> have 4x500G disks in a RAIDZ. I'd like to repurpose [...] as the second half of a mirror in a machine going into colo.

rsync or zfs send -R the 128G to the machine going to the colo. If you need more space in colo, remove one disk, faulting sys1, and add (stripe) it on colo (note: you will

Re: [zfs-discuss] Which DTrace provider to use

2008-02-13 Thread Rob Logan
> Way crude, but effective enough:

Kinda cool, but isn't that what
sar -f /var/adm/sa/sa`date +%d` -A | grep -v ,
is for? crontab -e sys to start.. for more fun:
acctadm -e extended -f /var/adm/exacct/proc process
Rob

Re: [zfs-discuss] hardware for zfs home storage

2008-01-14 Thread Rob Logan
> appears to have unlimited backups for 4.95 a month.

http://rsync.net/ $1.60 per month per G (no experience). To keep this more on-topic and not spam-like: what about [home] backups?.. what's the best deal for you: 1) a 4+1 (space) or 2*(2+1) (speed) 64bit 4G+ zfs nas (data for old

Re: [zfs-discuss] Panic on Zpool Import (Urgent)

2008-01-13 Thread Rob Logan
As it's been pointed out, it's likely 6458218, but a
zdb -e poolname
will tell you a little more. Rob

Re: [zfs-discuss] What does dataset is busy actually mean? [creating snap]

2008-01-12 Thread Rob Logan
> what causes a dataset to get into this state?

While I'm not exactly sure, I do have the steps leading up to when I saw it trying to create a snapshot, ie:
10 % zfs snapshot z/b80nd/[EMAIL PROTECTED]
cannot create snapshot 'z/b80nd/[EMAIL PROTECTED]': dataset is busy
13 % mount -F zfs

[zfs-discuss] NCQ

2008-01-09 Thread Rob Logan
Fun example that shows NCQ lowers wait and %w, but doesn't have much impact on final speed. [scrubbing, devs reordered for clarity]
                   extended device statistics
device    r/s   w/s     kr/s  kw/s  wait  actv  svc_t  %w  %b
sd2     454.7   0.0  47168.0   0.0   0.0   5.7

Re: [zfs-discuss] zfs panic on boot

2008-01-03 Thread Rob Logan
space_map_add+0xdb(ff014c1a21b8, 472785000, 1000)
space_map_load+0x1fc(ff014c1a21b8, fbd52568, 1, ff014c1a1e88, ff0149c88c30)
running snv79. Hmm.. did you spend any time in snv_74 or snv_75 that might have gotten

Re: [zfs-discuss] Question - does a snapshot of root include child

2007-12-20 Thread Rob Logan
> I've only started using ZFS this week, and hadn't even touched a Unix

Welcome to ZFS... here is a simple script you can start with:
#!/bin/sh
snaps=15
today=`date +%j`
nuke=`expr $today - $snaps`
yesterday=`expr $today - 1`
if [ $yesterday -lt 0 ] ; then
  yesterday=365
fi
if [ $nuke -lt 0
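One plausible completion of that script (hedged; the dataset name is hypothetical, and day-of-year snapshot names give a self-rotating window):
#!/bin/sh
snaps=15
today=`date +%j`
nuke=`expr $today - $snaps`
yesterday=`expr $today - 1`
if [ $yesterday -lt 1 ] ; then yesterday=365 ; fi
if [ $nuke -lt 1 ] ; then nuke=`expr $nuke + 365` ; fi
zfs destroy z/home@$nuke 2>/dev/null    # oldest snapshot ages out
zfs snapshot z/home@$today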

Re: [zfs-discuss] Fwd: zfs boot suddenly not working

2007-12-18 Thread Rob Logan
> bootfs rootpool/rootfs

Does grep zfs /mnt/etc/vfstab look like:
rootpool/rootfs  -  /  zfs  -  no  -
(bet it doesn't... edit like above and reboot) Or, second guess (well, third :-), is your theory, which can be checked with:
zpool import rootpool
zpool import

Re: [zfs-discuss] Fwd: zfs boot suddenly not working

2007-12-18 Thread Rob Logan
> I guess the zpool.cache in the bootimage got corrupted?

Not on zfs :-) Perhaps a path to a drive changed? Rob

Re: [zfs-discuss] JBOD performance

2007-12-17 Thread Rob Logan
> r/s   w/s  kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w   %b  device
> 0.0  48.0   0.0  3424.6   0.0  35.0     0.0   728.9   0  100  c2t8d0
> That service time is just terrible!

Yea, that service time is unreasonable. Almost a second for each command? And 35 more commands queued? (reorder =

[zfs-discuss] install notes for zfs root.

2007-11-29 Thread Rob Logan
After a fresh SMI labeled c0t0d0s0 / swap /export/home jumpstart, in /etc check:
hostname.e1000g0 defaultrouter netmasks resolv.conf nsswitch.conf services hosts coreadm.conf acctadm.conf dumpadm.conf named.conf rsync.conf
svcadm disable fc-cache cde-login cde-calendar-manager

Re: [zfs-discuss] Home Motherboard

2007-11-22 Thread Rob Logan
Here is a simple layout for 6 disks, toward speed:
/dev/dsk/c0t0d0s1  -  -     swap  -  no   -
/dev/dsk/c0t1d0s1  -  -     swap  -  no   -
root/snv_77        -  /     zfs   -  no   -
z/snv_77/usr       -  /usr  zfs   -  yes  -
z/snv_77/var       -

Re: [zfs-discuss] Home Motherboard

2007-11-22 Thread Rob Logan
> with 4 cores and 2-4G of ram.

Not sure 2G is enough... at least with 64bit there are no kernel space issues.
6 % echo '::memstat' | mdb -k
Page Summary     Pages    MB    %Tot
Kernel          692075

[zfs-discuss] Home Motherboard

2007-11-21 Thread Rob Logan
grew tired of the recycled 32bit cpus in http://www.opensolaris.org/jive/thread.jspa?messageID=127555 and bought this to put the two marvell88sx cards in: $255 http://www.supermicro.com/products/motherboard/Xeon3000/3210/X7SBE.cfm

Re: [zfs-discuss] which would be faster

2007-11-20 Thread Rob Logan
> On the other hand, the pool of 3 disks is obviously going to be much slower than the pool of 5

While today that's true, someday io will be balanced by the latency of vdevs rather than their number... plus two vdevs are always going to be faster than one vdev, even if one is slower than the

Re: [zfs-discuss] Backport of vfs_zfsacl.c to samba 3.0.26a, [and NexentaStor]

2007-11-02 Thread Rob Logan
I'm confused by this and NexentaStor... wouldn't it be better to use b77? With:
Heads Up: File system framework changes (supplement to CIFS' head's up)
Heads Up: Flag Day (Addendum) (CIFS Service)
Heads Up: Flag Day (CIFS Service)
caller_context_t in all VOPs - PSARC/2007/218
VFS Feature

Re: [zfs-discuss] zfs: allocating allocated segment (offset=

2007-10-12 Thread Rob Logan
> I suspect that the bad ram module might have been the root cause for that freeing free segment zfs panic, perhaps

I removed two 2G simms but left the two 512M simms, and also removed the kernelbase setting, but the zpool import still crashed the machine. It's also registered ECC ram; memtest86 v1.7

Re: [zfs-discuss] ZFS Mountroot and Bootroot Comparison

2007-10-05 Thread Rob Logan
> I'm not surprised that having /usr in a separate pool failed.

While this is discouraging (I have several b62 machines with root mirrored and /usr on raidz), if booting from raidz is a pri, and comes soon, at least I'd be happy :-) Rob

Re: [zfs-discuss] 8+2 or 8+1+spare?

2007-07-09 Thread Rob Logan
> which is better, 8+2 or 8+1+spare?

8+2 is safer for the same speed. 8+2 requires a little more math, so it's slower in theory (unlikely seen). (4+1)*2 is 2x faster, and in theory is less likely to have wasted space in a transaction group (unlikely seen). (4+1)*2 is cheaper to upgrade in

Re: [zfs-discuss] ZFS on 32-bit...

2007-06-30 Thread Rob Logan
> How does eeprom(1M) work on the Xeon that the OP said he has?

It's faked via /boot/solaris/bootenv.rc built into /platform/i86pc/$ISADIR/boot_archive

Re: [zfs-discuss] ZFS on 32-bit...

2007-06-29 Thread Rob Logan
> issues does ZFS have with running in only 32-bit mode?

With less than 2G ram, no worry... with more than 3G ram, if you don't need mem in userspace, give it to the kernel in virtual memory for zfs cache by moving the kernelbase...
eeprom kernelbase=0x8000
or for only 1G userland:
eeprom
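For flavor, a hedged illustration of the idea (the constant here is an assumption, not the truncated original's): kernelbase is the virtual address where the kernel's share of the 32-bit address space begins, so something like
eeprom kernelbase=0x80000000   # hypothetical 2G user / 2G kernel split
trades userland VA for kernel VA that the ARC can use.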

Re: [zfs-discuss] Suggestions on 30 drive configuration?

2007-06-26 Thread Rob Logan
> an array of 30 drives in a RaidZ2 configuration with two hot spares
> I don't want to mirror 15 drives to 15 drives

OK, so space over speed... and you are willing to toss somewhere between 4 and 15 drives for protection. raidz splits the (up to 128k) write/read recordsize into each element of the
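To make that split concrete: in a 30-wide raidz2, a 128K record is chopped across the 28 data drives, about 4.6K per disk, so every record read touches every spindle and the vdev delivers roughly one disk's worth of random iops.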

[zfs-discuss] Re: marvell88sx error in command 0x2f: status 0x51

2007-06-21 Thread Rob Logan
> [hourly] marvell88sx error in command 0x2f: status 0x51

Ah, it's some kinda SMART or FMA query that model
WDC WD3200JD-00KLB0 firmware 08.05J08 serial number WD-WCAMR2427571
supported features: 48-bit LBA, DMA, SMART, SMART self-test
SATA1 compatible
capacity = 625142448 sectors
drives

[zfs-discuss] marvell88sx error in command 0x2f: status 0x51

2007-06-19 Thread Rob Logan
With no seen effects, `dmesg` reports lots of
kern.warning] WARNING: marvell88sx1: port 3: error in command 0x2f: status 0x51
found in snv_62 and opensol-b66. Perhaps http://bugs.opensolaris.org/view_bug.do?bug_id=6539787 Can someone post part of the headers even if the code is closed?

Re: [zfs-discuss] Re: ZFS Apple WWDC Keynote Absence

2007-06-12 Thread Rob Logan
We know time machine requires an extra disk (local or remote), so it's reasonable to guess the non-bootable time machine disk could use zfs. Someone with a Leopard dvd (Rick Mann) could answer this...

[zfs-discuss] Holding disks for home servers

2007-06-07 Thread Rob Logan
On the third upgrade of the home nas, I chose http://www.addonics.com/products/raid_system/ae4rcs35nsa.asp to hold the disks. Each holds 5 disks in the space of three slots, and 4 fit into a http://www.google.com/search?q=stacker+810 case for a total of 20 disks. But if given a chance to go back

Re: [zfs-discuss] Re: Deterioration with zfs performance and recent zfs bits?

2007-06-01 Thread Rob Logan
> Patching zfs_prefetch_disable = 1 has helped

It's my belief this mainly aids scanning metadata. My testing with rsync, and yours with find (and seen with du ; zpool iostat -v 1), bears this out.. mainly tracked in bug 6437054 vdev_cache: wise up or die

Re: [zfs-discuss] Re: zfs boot image conversion kit is posted

2007-05-01 Thread Rob Logan
> sits there for a second, then boot loops and comes back to the grub menu.

I noticed this too when I was playing... using
kernel$ /platform/i86pc/kernel/$ISADIR/unix -v -B $ZFS-BOOTFS
I could see vmunix loading, but it quickly NMIed around the
rootnex: [ID 349649 kern.notice] isa0 at root

[zfs-discuss] rootpool notes

2007-04-24 Thread Rob Logan
updating my notes with Lori's rootpool notes found in http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ using the Solaris Express: Community Release DVD (no asserts like bfu code) from http://www.opensolaris.org/os/downloads/on/ and installing the Solaris Express (second option,

Re: [zfs-discuss] update on zfs boot support

2007-03-17 Thread Rob Logan
I'm sure it's not blessed, but another process to maximize the zfs space on a system with few disks is:
1) boot from SXCR http://www.opensolaris.org/os/downloads/on/
2) select min install with 512M / 512M swap, rest /export/home
Use format to copy the partition table from disk0 to disk1, umount
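The partition-table copy can also be scripted instead of done in format (a hedged alternative; device names hypothetical):
prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2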

Re: [zfs-discuss] Re: Re: How much do we really want zpool remove?

2007-02-27 Thread Rob Logan
> With modern journalling filesystems, I've never had to fsck anything or run a filesystem repair. Ever. On any of my SAN stuff.

You will.. even if the SAN is perfect, you will hit bugs in the filesystem code.. from lots of rsync hard links, or like this one from raidtools last week: Feb 9

[zfs-discuss] page rates

2007-02-26 Thread Rob Logan
This is a lightly loaded v20z, but it has zfs across its two disks.. it's hung (requiring a power cycle) twice since running 5.11 opensol-20060904. The last time, I had a `vmstat 1` running... nice page rates right before death :-)
kthr  memory  page  disk  faults

Re: [zfs-discuss] ZFS and HDLM 5.8 ... does that coexist well ? [MD21]

2007-01-23 Thread Rob Logan
> FWIW, the Micropolis 1355 is a 141 MByte (!) ESDI disk. The MD21 is an ESDI to SCSI converter.

Yup... it's the board in the middle left of http://rob.com/sun/sun2/md21.jpg Rob

[zfs-discuss] zpool import core

2006-11-22 Thread Rob Logan
Did a `zpool export zfs ; zpool import zfs` and got a core.
core file = core.import -- program ``/sbin/zpool'' on platform i86pc
SIGSEGV: Segmentation Fault
$c
libzfs.so.1`zfs_prop_get+0x24(0, d, 80433f0, 400, 0, 0)
libzfs.so.1`dataset_compare+0x39(80d5fd0, 80d5fe0)

Re: [zfs-discuss] unaccounted for daily growth in ZFS disk space usage

2006-08-26 Thread Rob Logan
> For various reasons, I can't post the zfs list type

Here is one, and it seems in line with expected netapp(tm) type usage, considering the cluster size differences.
14 % cat snap_sched
#!/bin/sh
snaps=15
for fs in `echo Videos Movies Music users local`
do
  i=$snaps
  zfs destroy zfs/[EMAIL
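The destroy target was mangled by the archive's address obfuscation; a plausible completion of the loop (the snapshot naming is an assumption):
for fs in Videos Movies Music users local
do
  i=$snaps
  zfs destroy zfs/$fs@$i            # oldest snapshot ages out
  while [ $i -gt 1 ] ; do
    prev=`expr $i - 1`
    zfs rename zfs/$fs@$prev zfs/$fs@$i
    i=$prev
  done
  zfs snapshot zfs/$fs@1
done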

Re: [zfs-discuss] Expanding raidz2 [Infrant]

2006-07-13 Thread Rob Logan
> Infrant NAS box and using their X-RAID instead.

I've gone back to solaris from an Infrant box.
1) while the Infrant cpu is sparc, it's way, way, slow.
   a) the web UI takes 3-5 seconds per page
   b) any local process, rsync, UPnP, SlimServer, is cpu starved
2) like a netapp,

Re: [zfs-discuss] Re: Expanding raidz2

2006-07-13 Thread Rob Logan
> comfortable with having 2 parity drives for 12 disks,

The thread-starting config of 4 disks per controller(?):
zpool create tank raidz2 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c2t1d0 c2t2d0
then later
zpool add tank raidz2 c2t3d0 c2t4d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0
as described, doubles ones

Re: [zfs-discuss] Re: Thumper on (next) Tuesday?

2006-07-11 Thread Rob Logan
Well, glue a beard on me and call me Nostradamus:
http://www.sun.com/servers/x64/x4500/arch-wp.pdf
http://www.cooldrives.com/8-channel-8-port-sata-pci-card.html

Re: [zfs-discuss] Re: x86 CPU Choice for ZFS

2006-07-06 Thread Rob Logan
> with ZFS the primary driver isn't cpu, its how many drives can one attach :-) I use a 8 sata and 2 pata port http://supermicro.com/Aplus/motherboard/Opteron/nForce/H8DCE.cfm

But there was a v20z I could steal registered ram and cpus from. The H8DCE can't use the SATA HBA Framework, which only

[zfs-discuss] Re: opensol-20060605 # zpool iostat -v 1

2006-06-11 Thread Rob Logan
> a total of 4*64k = 256k to fetch a 2k block.

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6437054 Perhaps a quick win would be to tell vdev_cache about the DMU_OT_* type so it can read ahead appropriately. It seems the largest losses are metadata (du, find, scrub/resilver).

Re: [zfs-discuss] ata panic [fixed]

2006-05-29 Thread Rob Logan
Rob Logan wrote:
> `mv`ing files from a zfs dir to another zfs filesystem in the same pool will panic a 8 sata zraid http://supermicro.com/Aplus/motherboard/Opteron/nForce/H8DCE.cfm system with

::status
debugging crash dump vmcore.3 (64-bit) from zfs
operating system: 5.11 opensol

[zfs-discuss] ata panic

2006-05-26 Thread Rob Logan
`mv`ing files from a zfs dir to another zfs filesystem in the same pool will panic a 8 sata zraid http://supermicro.com/Aplus/motherboard/Opteron/nForce/H8DCE.cfm system with:
::status
debugging crash dump vmcore.3 (64-bit) from zfs
operating system: 5.11 opensol-20060523 (i86pc)
panic message: