Re: [zfs-discuss] latest zpool version in solaris 11 express

2011-07-20 Thread Rob Logan


plus, VirtualBox 4.1 with its new "network in a box" (Crossbow-based bridged networking) would like snv_159:

from http://www.virtualbox.org/wiki/Changelog

Solaris hosts: New Crossbow based bridged networking driver for Solaris 11 
build 159 and above

Rob



Re: [zfs-discuss] WarpDrive SLP-300

2010-11-17 Thread Rob Logan

 BTW, any new storage-controller-related drivers introduced in snv151a?

the 64-bit drivers in snv_147
-rwxr-xr-x   1 root sys   401200 Sep 14 08:44 mpt
-rwxr-xr-x   1 root sys   398144 Sep 14 09:23 mpt_sas
are a different size than in snv_151a
-rwxr-xr-x   1 root sys   400936 Nov 15 23:05 /kernel/drv/amd64/mpt
-rwxr-xr-x   1 root sys   399952 Nov 15 23:06 /kernel/drv/amd64/mpt_sas

and mpt_sas has a new printf:
reset was running, this event can not be handled this time

Rob



Re: [zfs-discuss] sharing a ssd between rpool and l2arc

2010-03-30 Thread Rob Logan


 you can't use anything but a block device for the L2ARC device.

sure you can... 
http://mail.opensolaris.org/pipermail/zfs-discuss/2010-March/039228.html
it even lives through a reboot (rpool is mounted before other pools)

zpool create -f test c9t3d0s0 c9t4d0s0
zfs create -V 3G rpool/cache
zpool add test cache /dev/zvol/dsk/rpool/cache
reboot

if you're asking for an L2ARC on rpool itself, well, yeah, it's not mounted soon enough,
but the point is to put rpool, swap, and the L2ARC for your storage pool all on a single
SSD..

Rob



Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-30 Thread Rob Logan

 if you disable the ZIL altogether, and you have a power interruption, failed 
 cpu, 
 or kernel halt, then you're likely to have a corrupt unusable zpool

the pool will always be fine, no matter what.

 or at least data corruption. 

yeah, it's a good bet that data sent to your file or zvol will not be there
when the box comes back, even though your program had finished seconds
before the crash.

Rob



Re: [zfs-discuss] SSD As ARC

2010-03-28 Thread Rob Logan
 Can't you slice the SSD in two, and then give each slice to the two zpools?
 This is exactly what I do ... use 15-20 GB for root and the rest for an L2ARC.

I like the idea of swapping on SSD too, but why not make a zvol for the L2ARC
so you're not limited by the hard partitioning?

Rob



Re: [zfs-discuss] SSD As ARC

2010-03-28 Thread Rob Logan

 I like the idea of swapping on SSD too, but why not make a zvol for the L2ARC
 so you're not limited by the hard partitioning?

it lives through a reboot.. 

zpool create -f test c9t3d0s0 c9t4d0s0
zfs create -V 3G rpool/cache
zpool add test cache /dev/zvol/dsk/rpool/cache
reboot 
zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
rpool ONLINE   0 0 0
  mirror  ONLINE   0 0 0
c9t1d0s0  ONLINE   0 0 0
c9t2d0s0  ONLINE   0 0 0

errors: No known data errors

  pool: test
 state: ONLINE
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
test ONLINE   0 0 0
  c9t3d0s0   ONLINE   0 0 0
  c9t4d0s0   ONLINE   0 0 0
cache
  /dev/zvol/dsk/rpool/cache  ONLINE   0 0 0

errors: No known data errors



Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Rob Logan

 An UPS plus disabling zil, or disabling synchronization, could possibly
 achieve the same result (or maybe better) iops wise.
Even with the fastest slog, disabling the ZIL will always be faster...
(fewer bytes to move)

 This would probably work given that your computer never crashes
 in an uncontrolled manner. If it does, some data may be lost
 (and possibly the entire pool lost, if you are unlucky).
the pool would never be at risk, but when your server
reboots, its clients will be confused that data they sent,
and the server promised it had saved, is gone.
For some clients, this small loss might mean the loss of their
entire dataset.
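
for reference, on builds of this vintage "disabling the ZIL" means a system-wide
tunable (the same one shows up in the install notes later in this archive); a
sketch, and it needs a reboot to take effect:

echo "set zfs:zil_disable = 1" >> /etc/system
reboot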

Rob



Re: [zfs-discuss] Reading ZFS config for an extended period

2010-02-15 Thread Rob Logan

  RFE open to allow you to store [DDT] on a separate top level VDEV

hmm, add to this spare, log and cache vdevs, and it's to the point of making
another pool and thinly provisioning volumes to maintain partitioning
flexibility.
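
a minimal sketch of that thin-provisioning idea, assuming a big pool named z
and sparse (-s) zvols carved out of it to back the more specialized pool and
cache; all of these names are made up:

zfs create -s -V 1T z/vol1
zpool create otherpool /dev/zvol/dsk/z/vol1
zfs create -s -V 3G z/cache1
zpool add otherpool cache /dev/zvol/dsk/z/cache1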

taemun: hey, thanks for closing the loop!

Rob


Re: [zfs-discuss] Cores vs. Speed?

2010-02-06 Thread Rob Logan
 I like the original Phenom X3 or X4 

we all agree RAM is the key to happiness. The debate is what offers the most ECC
RAM for the least $. I failed to realize the AM3 CPUs accept unbuffered ECC
DDR3-1333 like Lynnfield. To use Intel's 6 slots vs AMD's 4 slots, one must use
Registered ECC.
So the low-cost mission is something like

AMD Phenom II X4 955 Black Edition Deneb 3.2GHz Socket AM3 125W 
$150 http://www.newegg.com/Product/Product.aspx?Item=N82E16819103808  
$ 85 http://www.newegg.com/Product/Product.aspx?Item=N82E16813131609  
$ 60 http://www.newegg.com/Product/Product.aspx?Item=N82E16820139050

But we are still stuck at 8G without going to expensive ram or
a more expensive CPU.

Rob


Re: [zfs-discuss] Cores vs. Speed?

2010-02-05 Thread Rob Logan


 if zfs overlaps mirror reads across devices.

it does... I have one very old disk in this mirror, and
when I attached another element one could see more reads going
to the faster disks... this paste isn't from right after the attach
but since the reboot, yet one can still see the reads are
load balanced depending on the response of the elements
in the vdev.

13 % zpool iostat -v
              capacity     operations    bandwidth
pool used  avail   read  write   read  write
--  -  -  -  -  -  -
rpool   7.01G   142G  0  0  1.60K  1.44K
  mirror7.01G   142G  0  0  1.60K  1.44K
c9t1d0s0  -  -  0  0674  1.46K
c9t2d0s0  -  -  0  0687  1.46K
c9t3d0s0  -  -  0  0720  1.46K
c9t4d0s0  -  -  0  0750  1.46K


but I also support your conclusions.

Rob



Re: [zfs-discuss] Cores vs. Speed?

2010-02-05 Thread Rob Logan

 Intel's RAM is faster because it needs to be.
I'm confused how AMD's dual-channel, two-way interleaved
128-bit DDR2-667 into an on-CPU controller is faster than
Intel's Lynnfield dual-channel, rank- and channel-interleaved
DDR3-1333 into an on-CPU controller.
http://www.anandtech.com/printarticle.aspx?i=3634

 With the AMD CPU, the memory will run cooler and be cheaper. 
cooler yes, but only $2 more per gig for 2x bandwidth?

http://www.newegg.com/Product/Product.aspx?Item=N82E16820139050
http://www.newegg.com/Product/Product.aspx?Item=N82E16820134652

and if one uses all 16 slots, that 667MHz DIMM runs at 533MHz
with AMD. The same is true for Lynnfield: if one uses Registered
DDR3, one only gets 800MHz with all 6 slots populated (single or dual rank).

 Regardless, for zfs, memory is more important than raw CPU 
agreed! but everything must be balanced.

Rob


Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Rob Logan

  I am leaning towards AMD because of ECC support 

well, let's look at Intel's offerings... its RAM is faster than AMD's
at 1333MHz DDR3, and one gets ECC and a thermal sensor for $10 over non-ECC
http://www.newegg.com/Product/Product.aspx?Item=N82E16820139040

This MB has two Intel Ethernet ports, and for an extra $30 an Ethernet KVM (LOM)
http://www.newegg.com/Product/Product.aspx?Item=N82E16813182212

One needs a Xeon 34xx for ECC; the 45W version isn't on Newegg, and ignoring
the one without Hyper-Threading leaves us
http://www.newegg.com/Product/Product.aspx?Item=N82E16819117225

Yeah, at 95W it isn't exactly low power, but 4 cores @ 2533MHz and another
4 Hyper-Threaded cores is nice.. If you only need one core, the marketing
paperwork claims it will push to 2.93GHz too. But the RAM bandwidth is the
big win for Intel. 

Avoid the temptation, but at 2.8GHz without ECC, this one is close in $$
http://www.newegg.com/Product/Product.aspx?Item=N82E16819115214

Now, this gets one to 8G of ECC easily... AMD's unfair advantage is all those
RAM slots on their multi-socket MBs... A slow AMD CPU with 64G of RAM
might be better depending on your working set / dedup requirements.

Rob





Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-02 Thread Rob Logan

 true. but I buy a Ferrari for the engine and bodywork and chassis
 engineering. It is totally criminal what Sun/EMC/Dell/Netapp do charging

its interesting to read this with another thread containing:

 timeout issue is definitely the WD10EARS disks.
 replaced 24 of them with ST32000542AS (f/w CC34), and the problem departed 
with the WD disks.

everyone needs to eat. if Ferrari spreads their NRE over
the wheels, it might be because they are light and have
been tested not to melt from the heat. Sun/EMC/Dell/NetApp
test each of their components and sell the total car.

I'm thankful Sun shares their research and we can build on it.
(btw, NetApp ONTAP 8 is FreeBSD, and runs on standard hardware
after a little BIOS work :-)

Rob


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Rob Logan

 a 1U or 2U JBOD chassis for 2.5" drives,
from http://supermicro.com/products/nfo/chassis_storage.cfm 
the E1 (single) or E2 (dual) options have a SAS expander so
http://supermicro.com/products/chassis/2U/?chs=216
fits your build, or build it yourself with
http://supermicro.com/products/accessories/mobilerack/CSE-M28E2.cfm




Re: [zfs-discuss] 4 Internal Disk Configuration

2010-01-14 Thread Rob Logan


 By partitioning the first two drives, you can arrange to have a small
 zfs-boot mirrored pool on the first two drives, and then create a second
 pool as two mirror pairs, or four drives in a raidz to support your data.

agreed..

2 % zpool iostat -v
              capacity     operations    bandwidth
pool   used  avail   read  write   read  write
  -  -  -  -  -  -
r 8.34G  21.9G  0  5  1.62K  17.0K
  mirror  8.34G  21.9G  0  5  1.62K  17.0K
c5t0d0s0  -  -  0  2  3.30K  17.2K
c5t1d0s0  -  -  0  2  3.66K  17.2K
  -  -  -  -  -  -
z  375G   355G  6 32  67.2K   202K
  mirror   133G   133G  2 14  24.7K  84.2K
c5t0d0s7  -  -  0  3  53.3K  84.3K
c5t1d0s7  -  -  0  3  53.2K  84.3K
  mirror   120G   112G  1  9  21.3K  59.6K
c5t2d0-  -  0  2  38.4K  59.7K
c5t3d0-  -  0  2  38.2K  59.7K
  mirror   123G   109G  1  8  21.3K  58.6K
c5t4d0-  -  0  2  36.4K  58.7K
c5t5d0-  -  0  2  37.2K  58.7K
  -  -  -  -  -  -
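
a sketch of how a layout like the one above gets built (device names match the
iostat output; the slice sizes are whatever you carved out with format):

zpool create r mirror c5t0d0s0 c5t1d0s0
zpool create z mirror c5t0d0s7 c5t1d0s7 mirror c5t2d0 c5t3d0 mirror c5t4d0 c5t5d0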



[zfs-discuss] unable to zfs destroy

2010-01-08 Thread Rob Logan

this one has me a little confused. ideas?

j...@opensolaris:~# zpool import z
cannot mount 'z/nukeme': mountpoint or dataset is busy
cannot share 'z/cle2003-1': smb add share failed
j...@opensolaris:~# zfs destroy z/nukeme
internal error: Bad exchange descriptor
Abort (core dumped)
j...@opensolaris:~# adb core
core file = core -- program ``/sbin/zfs'' on platform i86pc
SIGABRT: Abort
$c
libc_hwcap1.so.1`_lwp_kill+0x15(1, 6, 80462a8, fee9bb5e)
libc_hwcap1.so.1`raise+0x22(6, 0, 80462f8, fee7255a)
libc_hwcap1.so.1`abort+0xf2(8046328, fedd, 8046328, 8086570, 8086970, 400)
libzfs.so.1`zfs_verror+0xd5(8086548, 813, fedc5178, 804635c)
libzfs.so.1`zfs_standard_error_fmt+0x225(8086548, 32, fedc5178, 808acd0)
libzfs.so.1`zfs_destroy+0x10e(808acc8, 0, 0, 80479c8)
destroy_callback+0x69(808acc8, 8047910, 80555ec, 8047910)
zfs_do_destroy+0x31f(2, 80479c8, 80479c4, 80718dc)
main+0x26a(3, 80479c4, 80479d4, 8053fdf)
_start+0x7d(3, 8047ae4, 8047ae8, 8047af0, 0, 8047af9)
^d
j...@opensolaris:~# uname -a
SunOS opensolaris 5.11 snv_130 i86pc i386 i86pc
j...@opensolaris:~# zpool status -v z
  pool: z
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub in progress for 0h39m, 19.15% done, 2h46m to go
config:

NAMESTATE READ WRITE CKSUM
z   ONLINE   0 0 2
  c3t0d0s7  ONLINE   0 0 4
  c3t1d0s7  ONLINE   0 0 0
  c2d0  ONLINE   0 0 4

errors: Permanent errors have been detected in the following files:

z/nukeme:0x0

j...@opensolaris:~# zfs list z/nukeme
NAME   USED  AVAIL  REFER  MOUNTPOINT
z/nukeme  49.0G   496G  49.0G  /z/nukeme
j...@opensolaris:~# zdb -d z/nukeme 0x0
zdb: can't open 'z/nukeme': Device busy

there is also no mount point /z/nukeme

any ideas how to nuke /z/nukeme?




Re: [zfs-discuss] Separate Zil on HDD ?

2009-12-02 Thread Rob Logan


 2 x 500GB mirrored root pool
 6 x 1TB raidz2 data pool
 I happen to have 2 x 250GB Western Digital RE3 7200rpm
 be better than having the ZIL 'inside' the zpool.

listing two log devices (a stripe) would have more spindles
than your single raidz2 vdev..  but for low-cost fun one
might make a tiny slice on all the disks of the raidz2
and list six log devices (a 6-way stripe) and not bother
adding the other two disks.
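
a sketch of that six-way log stripe, assuming the six raidz2 disks are
c0t0d0..c0t5d0 with a small slice 3 carved out on each (names are made up;
tank stands in for your data pool):

zpool add tank log c0t0d0s3 c0t1d0s3 c0t2d0s3 c0t3d0s3 c0t4d0s3 c0t5d0s3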

Rob




Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-30 Thread Rob Logan

 Chenbro 16 hotswap bay case.  It has 4 mini backplanes that each connect via 
 an SFF-8087 cable
 StarTech HSB430SATBK 

hmm, both are passive backplanes with one SATA tunnel per link...
no SAS expanders (LSISASx36) like those found in SuperMicro or J4x00 chassis with 4
links per connection.
I wonder if there is an LSI issue with too many links in HBA mode?

Rob



Re: [zfs-discuss] scrub differs in execute time?

2009-11-14 Thread Rob Logan

 P45 Gigabyte EP45-DS3P. I put the AOC card into a PCI slot

I'm not sure how many disks "half your disks" is, or how your vdevs
are configured, but the ICH10 has 6 SATA ports at 300MB/s and
one PCI port at 266MB/s (that's also shared with the IT8213 IDE chip)

so in an ideal world your scrub bandwidth would be 

300*6 MB/s with 6 disks on ICH10, in a stripe
300*1 MB/s with 6 disks on ICH10, in a raidz
300*3+(266/3) MB/s with 3 disks on ICH10, and 3 on shared PCI, in a stripe
266/3 MB/s with 3 disks on ICH10, and 3 on shared PCI, in a raidz
266/6 MB/s with 6 disks on shared PCI, in a stripe
266/6 MB/s with 6 disks on shared PCI, in a raidz

we know disks don't go that fast anyway, but going from an 8h to a 15h
scrub is very reasonable depending on vdev config.

Rob



Re: [zfs-discuss] scrub differs in execute time?

2009-11-14 Thread Rob Logan

 The ICH10 has a 32-bit/33MHz PCI bus which provides 133MB/s at half duplex.

you are correct, I thought the ICH10 used a 66MHz bus, when in fact it's 33MHz. The
AOC card works fine in a PCI-X 64-bit/133MHz slot good for 1,067 MB/s,
even if the motherboard uses a PXH chip via 8-lane PCIe.

Rob



Re: [zfs-discuss] raidz-1 vs mirror

2009-11-11 Thread Rob Logan

 from a two disk (10krpm) mirror layout to a three disk raidz-1. 

writes will be unnoticeably slower for raidz1 because of the parity calculation
and the latency of a third spindle, but reads will be half the speed
of the mirror, because the mirror can split reads between two disks.

another way to say the same thing:

a raidz vdev will run at the speed of the slowest disk in the array, while a
mirror will be (number of mirror sides) times faster for reads and run at
the speed of the slowest disk for writes.
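
for comparison, the two layouts being discussed, as one would create them
(pool and disk names are placeholders; these are alternatives, not both):

zpool create tank mirror c0t0d0 c0t1d0           # two-disk mirror: reads split across both sides
zpool create tank raidz  c0t0d0 c0t1d0 c0t2d0    # three-disk raidz1: reads gated by the slowest spindle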


Re: [zfs-discuss] PSARC recover files?

2009-11-09 Thread Rob Logan


frequent snapshots offer outstanding oops protection.
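
a minimal sketch of what that can look like from cron (the pool name z and the
hourly rotation are assumptions; same destroy-then-snapshot pattern as the
daily scripts elsewhere in this archive):

#!/bin/sh
# rolling 24 hourly snapshots of pool z; run hourly from root's crontab
hour=`date +%H`
zfs destroy  -r z@hour.$hour 2>/dev/null
zfs snapshot -r z@hour.$hour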

Rob


Re: [zfs-discuss] PSARC recover files?

2009-11-09 Thread Rob Logan


 Maybe to create snapshots after the fact

how does one quiesce a drive after the fact?


Re: [zfs-discuss] sub-optimal ZFS performance

2009-10-29 Thread Rob Logan


 So the solution is to never get more than 90% full disk space

while that's true, it's not Henrik's main discovery. Henrik points
out that 1/4 of the arc is used for metadata, and sometimes
that's not enough..

if
echo ::arc | mdb -k | egrep ^size
isn't reaching
echo ::arc | mdb -k | egrep ^c 
and you are maxing out your metadata space, check:
echo ::arc | mdb -k | grep meta_

one can set the metadata space (1G in this case) with:
echo arc_meta_limit/Z 0x40000000 | mdb -kw

So while Henrik's FS had some fragmentation, 1/4 of c_max wasn't
enough metadata arc space for the number of files in /var/pkg/download.

good find Henrik!

Rob


Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Rob Logan


 are you going to ask NetApp to support ONTAP on Dell systems,

well, ONTAP 8.0 is built on FreeBSD, so it wouldn't be too
hard to boot it on Dell hardware. Hey, at least it can do
aggregates larger than 16T now...
http://www.netapp.com/us/library/technical-reports/tr-3786.html

Rob


Re: [zfs-discuss] ZPOOL Metadata / Data Error - Help

2009-10-04 Thread Rob Logan


 Action: Restore the file in question if possible. Otherwise restore the
 entire pool from backup.
 metadata:0x0
 metadata:0x15


bet it's in a snapshot that looks to have been destroyed already. try

zpool clear POOL01
zpool scrub POOL01




Re: [zfs-discuss] bigger zfs arc

2009-10-02 Thread Rob Logan

 zfs will use as much memory as is necessary, but how is necessary calculated?

using arc_summary.pl from http://www.cuddletech.com/blog/pivot/entry.php?id=979
my tiny system shows:
 Current Size: 4206 MB (arcsize)
 Target Size (Adaptive):   4207 MB (c)
 Min Size (Hard Limit):894 MB (zfs_arc_min)
 Max Size (Hard Limit):7158 MB (zfs_arc_max)

so arcsize is close to the desired c; no pressure here, but it would be nice to know
how c is calculated, as it's much smaller than zfs_arc_max on a system
like yours with nothing else on it.

 When an L2ARC is attached does it get used if there is no memory pressure?

My guess is no, for the same reason an L2ARC takes so long to fill.
arc_summary.pl from the same system is

  Most Recently Used Ghost:    0%  9367837 (mru_ghost)   [ Return Customer Evicted, Now Back ]
  Most Frequently Used Ghost:  0% 11138758 (mfu_ghost)   [ Frequent Customer Evicted, Now Back ]

so with no ghosts, this system wouldn't benefit from an L2ARC even if added

In review:  (audit welcome)

if arcsize = c and is much less than zfs_arc_max,
  there is no point in adding system ram in hopes of increasing the arc.

if m?u_ghost is a small %, there is no point in adding an L2ARC.

if you do add an L2ARC, one must have ram between c and zfs_arc_max for its
pointers.
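
a quick way to eyeball those numbers without arc_summary.pl (a sketch; the
counter names are assumed from the arcstats kstat):

kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max
kstat -p zfs:0:arcstats:mru_ghost_hits zfs:0:arcstats:mfu_ghost_hits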

Rob


Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread Rob Logan

 The post I read said OpenSolaris guest crashed, and the guy clicked
 the ``power off guest'' button on the virtual machine.

I seem to recall the guest hung. 99% of Solaris hangs (without
a crash dump) are hardware in nature (my experience, backed by
an uptime of 1116 days), so the finger is still
pointed at VirtualBox's hardware implementation.

as for ZFS requiring better hardware: you could turn
off checksums and other protections so one isn't notified
of issues, making it act like the others.

Rob


Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-20 Thread Rob Logan

  the machine hung and I had to power it off.

kinda getting off the zpool import --txg -3 request, but
hangs are exceptionally rare and usually a ram or other
hardware issue; Solaris usually abends on software faults.

r...@pdm #  uptime
  9:33am  up 1116 day(s), 21:12,  1 user,  load average: 0.07, 0.05, 0.05
r...@pdm #  date
Mon Jul 20 09:33:07 EDT 2009
r...@pdm #  uname -a
SunOS pdm 5.9 Generic_112233-12 sun4u sparc SUNW,Ultra-250

Rob



Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Rob Logan

 c4 scsi-bus connectedconfigured   unknown
 c4::dsk/c4t15d0disk connectedconfigured   unknown
 :
 c4::dsk/c4t33d0disk connectedconfigured   unknown
 c4::es/ses0ESI  connectedconfigured   unknown

thanks! so SATA disks show up as JBOD in IT mode.. Is there some magic that
load balances the 4 SAS ports, as this shows up as one scsi-bus?



Re: [zfs-discuss] ZFS write I/O stalls

2009-06-30 Thread Rob Logan

 CPU is smoothed out quite a lot
yes, but the area under the CPU graph is less, so the
rate of real work performed is less, so the entire
job took longer (albeit smoother).

Rob


Re: [zfs-discuss] ZFS and Dinamic Stripe

2009-06-29 Thread Rob Logan

 try to be spread across different vdevs.

% zpool iostat -v
              capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
z            686G   434G     40      5  2.46M   271K
  c1t0d0s7   250G   194G     14      1   877K  94.2K
  c1t1d0s7   244G   200G     15      2   948K  96.5K
  c0d0       193G  39.1G     10      1   689K  80.2K


note that c0d0 is basically full, but is still serving 10 reads
for every 15 on the other vdevs, and about 82% of their write
bandwidth.
Rob


Re: [zfs-discuss] problems with l2arc in 2009.06

2009-06-18 Thread Rob Logan

 correct ratio of arc to l2arc?

from http://blogs.sun.com/brendan/entry/l2arc_screenshots

It costs some DRAM to reference the L2ARC, at a rate proportional to record size.
For example, it currently takes about 15 Gbytes of DRAM to reference 600 Gbytes of
L2ARC - at an 8 Kbyte ZFS record size. If you use a 16 Kbyte record size, that cost
would be halved - 7.5 Gbytes. This means you shouldn't, for example, configure a
system with only 8 Gbytes of DRAM, 600 Gbytes of L2ARC, and an 8 Kbyte record size -
if you did, the L2ARC would never fully populate.
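
back-of-envelope from those figures: 600 Gbytes of L2ARC at an 8 Kbyte record
size is roughly 75 million records, and 15 Gbytes of DRAM over 75 million
records works out to about 200 bytes of header per L2ARC record - so the DRAM
cost scales with L2ARC size divided by record size.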




Re: [zfs-discuss] Replacing HDD with larger HDD..

2009-05-22 Thread Rob Logan

 zpool offline grow /var/tmp/disk01
 zpool replace grow /var/tmp/disk01 /var/tmp/bigger_disk01

one doesn't need to offline before the replace, so as long as you
have one free disk interface you can cfgadm -c configure sata0/6
each new disk as you go... or you can offline and cfgadm each
disk in the same port as you go.

 It is still the same size. I would expect it to go to 9G.

a reboot or export/import would have fixed this.
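
i.e. something like (the -d is needed because these test devices are files in /var/tmp):

zpool export grow
zpool import -d /var/tmp grow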

 cannot import 'grow': no such pool available

you meant to type
zpool import -d /var/tmp grow

Rob


Re: [zfs-discuss] RAIDZ2: only half the read speed?

2009-05-22 Thread Rob Logan

 How does one look at the disk traffic?

iostat -xce 1

 OpenSolaris, raidz2 across 8 7200 RPM SATA disks:
 17179869184 bytes (17 GB) copied, 127.308 s, 135 MB/s

 OpenSolaris, flat pool across the same 8 disks:
 17179869184 bytes (17 GB) copied, 61.328 s, 280 MB/s

one raidz2 set of 8 disks can't be faster than the slowest
disk in the set, as it's one vdev... I would have expected
the 8-vdev set to be 8x faster than the single raidz[12]
set, but like Richard said, there is another bottleneck
in there that iostat will show.

Rob


Re: [zfs-discuss] SAS 15K drives as L2ARC

2009-05-05 Thread Rob Logan


  use a bunch of 15K SAS drives as L2ARC cache for several TBs of SATA disks?

perhaps... depends on the workload, and if the working set
can live on the L2ARC

 used mainly as astronomical images repository

hmm, perhaps two trays of 1T SATA drives, all
mirrors, rather than raidz sets of one tray.

i.e., please don't discount how one arranges the vdevs
in a given configuration.

Rob



[zfs-discuss] zpool import crash, import degraded mirror?

2009-04-29 Thread Rob Logan

When I type `zpool import` to see what pools are out there, it gets to

/1: open(/dev/dsk/c5t2d0s0, O_RDONLY) = 6
/1: stat64(/usr/local/apache2/lib/libdevid.so.1, 0x08042758) Err#2 ENOENT
/1: stat64(/usr/lib/libdevid.so.1, 0x08042758)= 0
/1: d=0x02D90002 i=241208 m=0100755 l=1  u=0 g=2 sz=61756
/1: at = Apr 29 23:41:17 EDT 2009  [ 1241062877 ]
/1: mt = Apr 27 01:45:19 EDT 2009  [ 124089 ]
/1: ct = Apr 27 01:45:19 EDT 2009  [ 124089 ]
/1: bsz=61952 blks=122   fs=zfs
/1: resolvepath(/usr/lib/libdevid.so.1, /lib/libdevid.so.1, 1023) = 18
/1: open(/usr/lib/libdevid.so.1, O_RDONLY)= 7
/1: mmapobj(7, 0x0002, 0xFEC70640, 0x080427C4, 0x) = 0
/1: close(7)= 0
/1: memcntl(0xFEC5, 4048, MC_ADVISE, MADV_WILLNEED, 0, 0) = 0
/1: fxstat(2, 6, 0x080430C0)= 0
/1: d=0x04A0 i=5015 m=0060400 l=1  u=0 g=0 
rdev=0x01800340
/1: at = Nov 19 21:19:26 EST 2008  [ 1227147566 ]
/1: mt = Nov 19 21:19:26 EST 2008  [ 1227147566 ]
/1: ct = Apr 29 23:23:11 EDT 2009  [ 1241061791 ]
/1: bsz=8192  blks=1 fs=devfs
/1: modctl(MODSIZEOF_DEVID, 0x01800340, 0x080430BC, 0xFEC51239, 0xFE8E92C0) 
= 0
/1: modctl(MODGETDEVID, 0x01800340, 0x0038, 0x080D5A48, 0xFE8E92C0) = 0
/1: fxstat(2, 6, 0x080430C0)= 0
/1: d=0x04A0 i=5015 m=0060400 l=1  u=0 g=0 
rdev=0x01800340
/1: at = Nov 19 21:19:26 EST 2008  [ 1227147566 ]
/1: mt = Nov 19 21:19:26 EST 2008  [ 1227147566 ]
/1: ct = Apr 29 23:23:11 EDT 2009  [ 1241061791 ]
/1: bsz=8192  blks=1 fs=devfs
/1: modctl(MODSIZEOF_MINORNAME, 0x01800340, 0x6000, 0x080430BC, 
0xFE8E92C0) = 0
/1: modctl(MODGETMINORNAME, 0x01800340, 0x6000, 0x0002, 0x0808FFC8) 
= 0
/1: close(6)= 0
/1: ioctl(3, ZFS_IOC_POOL_STATS, 0x08042220)= 0

and then the machine dies consistently with:

panic[cpu1]/thread=ff01d045a3a0:
BAD TRAP: type=e (#pf Page fault) rp=ff000857f4f0 addr=260 occurred in module 
unix due to a NULL pointer dereference

zpool:
#pf Page fault
Bad kernel fault at addr=0x260
pid=576, pc=0xfb854e8b, sp=0xff000857f5e8, eflags=0x10246
cr0: 8005003bpg,wp,ne,et,ts,mp,pe cr4: 6f8xmme,fxsr,pge,mce,pae,pse,de
cr2: 260
cr3: 12b69
cr8: c

rdi:  260 rsi:4 rdx: ff01d045a3a0
rcx:0  r8:   40  r9:21ead
rax:0 rbx:0 rbp: ff000857f640
r10:  bf88840 r11: ff01d041e000 r12:0
r13:  260 r14:4 r15: ff01ce12ca28
fsb:0 gsb: ff01ce985ac0  ds:   4b
 es:   4b  fs:0  gs:  1c3
trp:e err:2 rip: fb854e8b
 cs:   30 rfl:10246 rsp: ff000857f5e8
 ss:   38

ff000857f3d0 unix:die+dd ()
ff000857f4e0 unix:trap+1752 ()
ff000857f4f0 unix:cmntrap+e9 ()
ff000857f640 unix:mutex_enter+b ()
ff000857f660 zfs:zio_buf_alloc+2c ()
ff000857f6a0 zfs:arc_get_data_buf+173 ()
ff000857f6f0 zfs:arc_buf_alloc+a2 ()
ff000857f770 zfs:dbuf_read_impl+1b0 ()
ff000857f7d0 zfs:dbuf_read+fe ()
ff000857f850 zfs:dnode_hold_impl+d9 ()
ff000857f880 zfs:dnode_hold+2b ()
ff000857f8f0 zfs:dmu_buf_hold+43 ()
ff000857f990 zfs:zap_lockdir+67 ()
ff000857fa20 zfs:zap_lookup_norm+55 ()
ff000857fa80 zfs:zap_lookup+2d ()
ff000857faf0 zfs:dsl_pool_open+91 ()
ff000857fbb0 zfs:spa_load+696 ()
ff000857fc00 zfs:spa_tryimport+95 ()
ff000857fc40 zfs:zfs_ioc_pool_tryimport+3e ()
ff000857fcc0 zfs:zfsdev_ioctl+10b ()
ff000857fd00 genunix:cdev_ioctl+45 ()
ff000857fd40 specfs:spec_ioctl+83 ()
ff000857fdc0 genunix:fop_ioctl+7b ()
ff000857fec0 genunix:ioctl+18e ()
ff000857ff10 unix:brand_sys_sysenter+1e6 ()

the offending disk, c5t2d0s0, is part of a mirror; if it is removed I can
see the results (from the other mirror half) and the machine does not crash.
all 8 labels diff as identical:

version=13
name='r'
state=0
txg=2110897
pool_guid=10861732602511278403
hostid=13384243
hostname='nas'
top_guid=6092190056527819247
guid=16682108003687674581
vdev_tree
type='mirror'
id=0
guid=6092190056527819247
whole_disk=0
metaslab_array=23
metaslab_shift=31
ashift=9
asize=320032473088
is_log=0
children[0]
type='disk'
id=0
guid=16682108003687674581
path='/dev/dsk/c5t2d0s0'

Re: [zfs-discuss] Motherboard for home zfs/solaris file server

2009-02-24 Thread Rob Logan


Not. Intel decided we don't need ECC memory on the Core i7 


I thought that was Core i7 vs Xeon E55xx for socket
LGA-1366, which is why this X58 MB claims ECC support:
http://supermicro.com/products/motherboard/Xeon3000/X58/X8SAX.cfm




Re: [zfs-discuss] SMART data

2008-12-08 Thread Rob Logan

the sata framework uses the sd driver, so it's:

4 % smartctl -d scsi -a /dev/rdsk/c4t2d0s0
smartctl version 5.36 [i386-pc-solaris2.8] Copyright (C) 2002-6 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

Device: ATA  WDC WD1001FALS-0 Version: 0K05
Serial number:
Device type: disk
Local Time is: Mon Dec  8 15:14:22 2008 EST
Device supports SMART and is Enabled
Temperature Warning Disabled or Not Supported
SMART Health Status: OK

Current Drive Temperature: 45 C

Error Counter logging not supported
No self-tests have been logged

5 % /opt/SUNWhd/hd/bin/hd -e c4t2
Revision: 16
Offline status 132
Selftest status 0
Seconds to collect 19200
Time in minutes to run short selftest 2
Time in minutes to run extended selftest 221
Offline capability 123
SMART capability 3
Error logging capability 1
Checksum 0x86
Identification Status Current Worst Raw data
   1 Raw read error rate0x2f   200   2000
   3 Spin up time   0x27   253   253 6216
   4 Start/Stop count   0x32   100   100   11
   5 Reallocated sector count   0x33   200   2000
   7 Seek error rate0x2e   100   2530
   9 Power on hours count   0x32   100   100  446
  10 Spin retry count   0x32   100   2530
  11 Recalibration Retries count0x32   100   2530
  12 Device power cycle count   0x32   100   100   11
192 Power off retract count0x32   200   200   10
193 Load cycle count   0x32   200   200   11
194 Temperature0x22   105   103  45/  0/  0 (degrees C 
cur/min/max)
196 Reallocation event count   0x32   200   2000
197 Current pending sector count   0x32   200   2000
198 Scan uncorrected sector count  0x30   200   2000
199 Ultra DMA CRC error count  0x32   200   2000
200 Write/Multi-Zone Error Rate0x8200   2000


http://www.opensolaris.org/jive/thread.jspa?threadID=84296


Re: [zfs-discuss] Inexpensive ZFS home server

2008-11-12 Thread Rob Logan

  I don't think the Pentium E2180 has the lanes to use ECC RAM.

look at the north bridge, not the cpu.. the PowerEdge SC440
uses the Intel 3000 MCH, which supports up to 8GB unbuffered ECC
or non-ECC DDR2 667/533 SDRAM. it's been replaced with
the Intel 32x0, which uses DDR2 800/667MHz unbuffered ECC /
non-ECC SDRAM.

http://www.intel.com/products/server/chipsets/3200-3210/3200-3210-overview.htm

Rob


Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2008-10-29 Thread Rob Logan

  ECC?

$60 unbuffered 4GB 800MHz DDR2 ECC CL5 DIMM (Kit Of 2)
http://www.provantage.com/kingston-technology-kvr800d2e5k2-4g~7KIN90H4.htm

for Intel 32x0 north bridge like
http://www.provantage.com/supermicro-x7sbe~7SUPM11K.htm


Re: [zfs-discuss] zfs equivalent of ufsdump and ufsrestore

2008-05-30 Thread Rob Logan
  I'd like to take a backup of a live filesystem without modifying
  the last  accessed time.

why not take a snapshot?
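
a sketch: the snapshot freezes the filesystem without touching atime on the
live copy, and you can back up out of it with send or straight from .zfs
(dataset and snapshot names here are examples):

zfs snapshot pool/fs@backup
zfs send pool/fs@backup > /backup/fs.zfs
# or: tar cf /backup/fs.tar -C /pool/fs/.zfs/snapshot/backup .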

Rob


Re: [zfs-discuss] zfs equivalent of ufsdump and ufsrestore

2008-05-30 Thread Rob Logan
  Is there a way to efficiently  replicating a complete zfs-pool
  including all filesystems and snapshots?

zfs send -R

  -R   Generate a replication stream package, which will
       replicate the specified filesystem, and all
       descendant file systems, up to the named snapshot.
       When received, all properties, snapshots,
       descendent file systems, and clones are preserved.

       If the -i or -I flags are used in conjunction with
       the -R flag, an incremental replication stream is
       generated. The current values of properties, and
       current snapshot and file system names are set
       when the stream is received. If the -F flag is
       specified when this stream is received, snapshots
       and file systems that do not exist on the sending
       side are destroyed.
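
a typical use, with placeholder pool and host names (the install notes later
in this archive do exactly this):

zfs snapshot -r z@move
zfs send -R z@move | ssh remotehost zfs recv -v -F -d z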


Re: [zfs-discuss] What is a vdev?

2008-05-30 Thread Rob Logan
  making all the drives in a *zpool* the same size.
The only issue with having vdevs of different sizes is when
one fills up, reducing the stripe width for writes.

  making all the drives in a *vdev* (of almost any type) the same
The only issue is the unused space on the largest device, but
then we call that short stroking for speed :-)

 Yes you can add 4 1TB drives just as you described.
yup..

Rob



Re: [zfs-discuss] slog failure ... *ANY* way to recover?

2008-05-30 Thread Rob Logan
 1) an l2arc or log device needs to be evacuation-possible

how about evacuation of any vdev? (pool shrink!)

  2) any failure of a l2arc or log device should never prevent
  importation of a pool.

how about import or creation of any kinda degraded pool?

Rob


Re: [zfs-discuss] is mirroring a vdev possible?

2008-05-30 Thread Rob Logan
  replace a current raidz2 vdev with a mirror.

you're asking for vdev removal, or pool shrink, which isn't
finished yet.

Rob


Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-27 Thread Rob Logan
  There is something more to consider with SSDs uses as a cache device.
why use SATA as the interface? perhaps
http://www.tgdaily.com/content/view/34065/135/
would be better? (no experience)

cards will start at 80 GB and will scale to 320 and 640 GB next year.
By the end of 2008, Fusion io also hopes to roll out a 1.2 TB card.
160 parallel pipelines that can read data at 800 megabytes per second
and write at 600 MB/sec 4K blocks and then streaming eight
simultaneous 1 GB reads and writes.  In that test, the ioDrive
clocked in at 100,000 operations per second...  beat $30 dollars a GB,


Re: [zfs-discuss] zfs raidz2 configuration mistake

2008-05-21 Thread Rob Logan
  1)  Am I right in my reasoning?
yes

  2)  Can I remove the new disks from the pool, and re-add them under the 
  raidz2 pool
copy the data off the pool, destroy and remake the pool, and copy back
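
roughly, with placeholder names (scratch is any pool big enough to park the data):

zfs snapshot -r tank@evac
zfs send -R tank@evac | zfs recv -F -d scratch
zpool destroy tank
zpool create tank raidz2 disk1 disk2 disk3 disk4 disk5 disk6
zfs send -R scratch@evac | zfs recv -F -d tank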

  3)  How can I check how much zfs data is written on the actual disk (say 
  c12)?
zpool iostat -v




Re: [zfs-discuss] opensolaris 2008.05 boot recovery

2008-05-20 Thread Rob Logan
  would do and booted from the CD. OK, now I zpool imported rpool,
  modified [], exported the pool, and rebooted.

the oops was exporting the pool; a reboot right after editing
would have worked as expected, since rpool wouldn't have been marked as exported

so boot from the cdrom again, zpool import rpool
mount -F zfs rpool/root /mnt
fix /mnt/etc/shadow as before, but then
cp /etc/zfs/zpool.cache /mnt/etc/zfs
/usr/sbin/bootadm update-archive -R /mnt
reboot

Rob



Re: [zfs-discuss] question about zpool import

2008-05-20 Thread Rob Logan
type:

zpool import 11464983018236960549 rpool.old
zpool import -f mypool
zpool upgrade -a
zfs   upgrade -a


Re: [zfs-discuss] replace, restart, gone - HELP!

2008-05-20 Thread Rob Logan
  There's also a spare attached to the pool that's not showing here.

can you make it show?

Rob


Re: [zfs-discuss] replace, restart, gone - HELP!

2008-05-20 Thread Rob Logan
  How do I go about making it show?

zdb -e exported_pool_name

will show the children's paths; find the path of the spare
that's missing, and once you get it to show up you can import the pool.

Rob


Re: [zfs-discuss] cp -r hanged copying a directory

2008-05-02 Thread Rob Logan
or work around the NCQ bug in the drive's FW by typing:

su
echo "set sata:sata_max_queue_depth = 0x1" >> /etc/system
reboot

Rob


Re: [zfs-discuss] cp -r hanged copying a directory

2008-05-01 Thread Rob Logan

hmm, three drives with 35 io requests in the queue
and none active? remind me not to buy a drive
with that FW..

1) upgrade the FW in the drives or

2) turn off NCQ with:
echo "set sata:sata_max_queue_depth = 0x1" >> /etc/system


Rob


Re: [zfs-discuss] cp -r hanged copying a directory

2008-04-28 Thread Rob Logan
  I did the cp -r dir1 dir2 again and when it hanged

when it's hung, can you type:  iostat -xce 1
in another window, and is there a 100 in the %b column?
when you reset and try the cp again, and look at
iostat -xce 1 on the second hang, is the same disk at 100 in %b?

if all your windows are hung, does your keyboard num lock LED
track the num lock key?

Rob



Re: [zfs-discuss] zfs send/recv question

2008-03-06 Thread Rob Logan
  Because then I have to compute yesterday's date to do the  
incremental dump.

snaps=15
today=`date +%j`
# to change the second day of the year from 002 to 2
today=`expr $today + 0`
nuke=`expr $today - $snaps`
yesterday=`expr $today - 1`

if [ $yesterday -lt 1 ] ; then
   yesterday=365
fi

if [ $nuke -lt 1 ] ; then
   nuke=`expr 365 + $nuke`
fi

zfs destroy  -r [EMAIL PROTECTED]
zfs snapshot -r [EMAIL PROTECTED]
zfs destroy  -r [EMAIL PROTECTED]
zfs snapshot -r [EMAIL PROTECTED]

zfs send -i z/[EMAIL PROTECTED]  z/[EMAIL PROTECTED]  | bzip2 -c |\
   ssh host.com bzcat | zfs recv -v -F -d z
zfs send -i z/[EMAIL PROTECTED]  z/[EMAIL PROTECTED]  | bzip2 -c |\
   ssh host.com bzcat | zfs recv -v -F -d z
zfs send -i z/[EMAIL PROTECTED]  z/[EMAIL PROTECTED]  | bzip2 -c |\
   ssh host.com bzcat | zfs recv -v -F -d z



Re: [zfs-discuss] What is likely the best way to accomplish this task?

2008-03-04 Thread Rob Logan
  have 4x500G disks in a RAIDZ.  I'd like to repurpose [...] as the second
  half of a mirror in a machine going into colo.

rsync or zfs send -R the 128G to the machine going to the colo

if you need more space in colo, remove one disk (faulting sys1)
and add (stripe) it on colo. (note: you will need to destroy the pool
on colo after copying everything back, in order to attach rather than add the
disk going to colo; see the sketch below)

destroy and remake sys1 as 2+1 and copy it back.

removing a vdev is coming, but it will be a whole vdev, i.e. remove
the 3+1 after you added a 2+1.
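
the add-versus-attach distinction above, spelled out with placeholder names:
zpool add stripes in a new top-level vdev (more space now, but it can't be
removed later), while zpool attach turns an existing disk into a mirror,
which is what you eventually want in the colo:

zpool add colo c1t1d0                # stripe: extra space, no redundancy, not removable
zpool attach colo c1t0d0 c1t1d0      # mirror: c1t1d0 becomes the other half of c1t0d0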



Re: [zfs-discuss] Which DTrace provider to use

2008-02-13 Thread Rob Logan
  Way crude, but effective enough:

kinda cool, but isn't that what
sar -f /var/adm/sa/sa`date +%d` -A | grep -v ,
is for?  crontab -e sys
to start it..

for more fun
acctadm -e extended -f /var/adm/exacct/proc process

Rob



Re: [zfs-discuss] hardware for zfs home storage

2008-01-14 Thread Rob Logan
   appears to have unlimited backups for 4.95 a month.

http://rsync.net/ $1.60 per month per G  (no experience)

to keep this more on-topic and not spam-like: what about [home]
backups?.. what's the best deal for you:

1) a 4+1 (space) or 2*(2+1) (speed) 64bit 4G+ zfs nas
   (data for old thread topic :-)

2) same nas but rsync to a 3+0 pool kept remote, done periodically

3) same nas but rsync to a service

how large are the physical and code-base risks?

Rob

ps: sorry about hijacking the thread..


Re: [zfs-discuss] Panic on Zpool Import (Urgent)

2008-01-13 Thread Rob Logan
as it's been pointed out, it's likely bug 6458218,
but a zdb -e poolname
will tell you a little more

Rob



Re: [zfs-discuss] What does dataset is busy actually mean? [creating snap]

2008-01-12 Thread Rob Logan

  what causes a dataset to get into this state?

while I'm not exactly sure, I do have the steps leading up to when
I saw it trying to create a snapshot. ie:

10 % zfs snapshot z/b80nd/[EMAIL PROTECTED]
cannot create snapshot 'z/b80nd/[EMAIL PROTECTED]': dataset is busy
13 % mount -F zfs z/b80nd/var /z/b80nd/var
mount: Mount point /z/b80nd/var does not exist.
14 % mount -F zfs z/b80nd/var /mnt
15 % zfs snapshot -r z/[EMAIL PROTECTED]
16 % zfs list | grep 0107
root/0107nd455M   107G  6.03G  legacy
root/[EMAIL PROTECTED] 50.5M  -  6.02G  -
z/[EMAIL PROTECTED]0  -   243M  -
z/b80nd/[EMAIL PROTECTED]0  -  1.18G  -
z/b80nd/[EMAIL PROTECTED]0  -  2.25G  -
z/b80nd/[EMAIL PROTECTED]0  -  56.3M  -

running 64bit opensol-20080107 on intel

to get there I was walking through this cookbook:

zfs snapshot root/[EMAIL PROTECTED]
zfs clone root/[EMAIL PROTECTED] root/0107nd
cat /etc/vfstab | sed 's/^root/#root/' | sed 's/^z/#z/' > /root/0107nd/etc/vfstab
echo "root/0107nd - / zfs - no -" >> /root/0107nd/etc/vfstab
cat /root/0107nd/etc/vfstab
zfs snapshot -r z/[EMAIL PROTECTED]
rsync -a --del --verbose /usr/.zfs/snapshot/dump/ /root/0107nd/usr
rsync -a --del --verbose /opt/.zfs/snapshot/dump/ /root/0107nd/opt
rsync -a --del --verbose /var/.zfs/snapshot/dump/ /root/0107nd/var
zfs set mountpoint=legacy root/0107nd
zpool set bootfs=root/0107nd root
reboot

mkdir -p /z/tmp/bfu ; cd /z/tmp/bfu
wget http://dlc.sun.com/osol/on/downloads/20080107/SUNWonbld.i386.tar.bz2
bzip2 -d -c SUNWonbld.i386.tar.bz2 | tar -xvf -
pkgadd -d onbld
wget http://dlc.sun.com/osol/on/downloads/20080107/on-bfu-nightly-osol-nd.i386.tar.bz2
bzip2 -d -c on-bfu-nightly-osol-nd.i386.tar.bz2 | tar -xvf -
setenv FASTFS /opt/onbld/bin/i386/fastfs
setenv BFULD /opt/onbld/bin/i386/bfuld
setenv GZIPBIN /usr/bin/gzip
/opt/onbld/bin/bfu /z/tmp/bfu/archives-nightly-osol-nd/i386
/opt/onbld/bin/acr
echo etc/zfs/zpool.cache >> /boot/solaris/filelist.ramdisk  ; echo "bug in bfu"
reboot

rm -rf /bfu* /.make* /.bfu*
zfs snapshot root/[EMAIL PROTECTED]
mount -F zfs z/b80nd/var /mnt  ; echo bug in zfs
zfs snapshot -r z/[EMAIL PROTECTED]
zfs clone z/[EMAIL PROTECTED] z/0107nd
zfs set compression=lzjb z/0107nd
zfs clone z/b80nd/[EMAIL PROTECTED] z/0107nd/usr
zfs clone z/b80nd/[EMAIL PROTECTED] z/0107nd/var
zfs clone z/b80nd/[EMAIL PROTECTED] z/0107nd/opt
rsync -a --del --verbose /.zfs/snapshot/dump/ /z/0107nd
zfs set mountpoint=legacy z/0107nd/usr
zfs set mountpoint=legacy z/0107nd/opt
zfs set mountpoint=legacy z/0107nd/var
echo z/0107nd/usr - /usr zfs - yes -  /etc/vfstab
echo z/0107nd/var - /var zfs - yes -  /etc/vfstab
echo z/0107nd/opt - /opt zfs - yes -  /etc/vfstab
reboot

heh heh, booting from a clone of a clone... wasted space under
root/`uname -v`/usr for a few libs needed at boot, but having
/usr /var /opt on the compressed pool with two raidz vdevs boots
to login in 45secs rather than 52secs on the single-vdev root pool.



[zfs-discuss] NCQ

2008-01-09 Thread Rob Logan

fun example that shows NCQ lowers wait and %w, but doesn't have
much impact on final speed. [scrubbing, devs reordered for clarity]

 extended device statistics
devicer/sw/s   kr/skw/s wait actv  svc_t  %w  %b
sd2 454.70.0 47168.00.0  0.0  5.7   12.6   0  74
sd4 440.70.0 45825.90.0  0.0  5.5   12.4   0  78
sd6 445.70.0 46239.20.0  0.0  6.6   14.7   0  79
sd7 452.70.0 46850.70.0  0.0  6.0   13.3   0  79
sd8 460.70.0 46947.70.0  0.0  5.5   11.8   0  73
sd3 426.70.0 43726.40.0  5.6  0.8   14.9  73  79
sd5 424.70.0 44456.40.0  6.6  0.9   17.7  83  90
sd9 430.70.0 44266.50.0  5.8  0.8   15.5  78  84
sd10421.70.0 44451.40.0  6.3  0.9   17.1  80  87
sd11421.70.0 44196.10.0  5.8  0.8   15.8  75  80

              capacity     operations    bandwidth
pool used  avail   read  write   read  write
--  -  -  -  -  -  -
z   1.06T  3.81T  2.92K  0   360M  0
  raidz1 564G  2.86T  1.51K  0   187M  0
c0t1d0  -  -457  0  47.3M  0
c1t1d0  -  -457  0  47.4M  0
c0t6d0  -  -456  0  47.4M  0
c0t4d0  -  -458  0  47.4M  0
c1t3d0  -  -463  0  47.3M  0
  raidz1 518G   970G  1.40K  0   174M  0
c1t4d0  -  -434  0  44.7M  0
c1t6d0  -  -433  0  45.3M  0
c0t3d0  -  -445  0  45.3M  0
c1t5d0  -  -427  0  44.4M  0
c0t5d0  -  -424  0  44.3M  0
--  -  -  -  -  -  -






Re: [zfs-discuss] zfs panic on boot

2008-01-03 Thread Rob Logan
  space_map_add+0xdb(ff014c1a21b8, 472785000, 1000)
  space_map_load+0x1fc(ff014c1a21b8, fbd52568, 1,  
ff014c1a1e88, ff0149c88c30)
  running snv79.

hmm.. did you spend any time in snv_74 or snv_75 that might
have gotten http://bugs.opensolaris.org/view_bug.do?bug_id=6603147

zdb -e name_of_pool_that_crashes_on_import
would be interesting, but the damage might have been done.

Rob


Re: [zfs-discuss] Question - does a snapshot of root include child

2007-12-20 Thread Rob Logan
  I've only started using ZFS this week, and hadn't even touched a Unix

welcome to ZFS... here is a simple script you can start with:

#!/bin/sh

snaps=15
today=`date +%j`
nuke=`expr $today - $snaps`
yesterday=`expr $today - 1`

if [ $yesterday -lt 1 ] ; then
   yesterday=365
fi

if [ $nuke -lt 1 ] ; then
   nuke=`expr 365 + $nuke`
fi

zfs destroy  -r [EMAIL PROTECTED]
zfs snapshot -r [EMAIL PROTECTED]
zfs destroy  -r [EMAIL PROTECTED]
zfs snapshot -r [EMAIL PROTECTED]

zfs send -i z/[EMAIL PROTECTED]  z/[EMAIL PROTECTED]  | bzip2 -c | ssh  
some.host.com bzcat | zfs recv -v -F -d z
zfs send -i z/[EMAIL PROTECTED]z/[EMAIL PROTECTED]| bzip2 -c | ssh  
some.host.com bzcat | zfs recv -v -F -d z
zfs send -i z/[EMAIL PROTECTED] z/[EMAIL PROTECTED] | bzip2 -c | ssh  
some.host.com bzcat | zfs recv -v -F -d z



Re: [zfs-discuss] Fwd: zfs boot suddenly not working

2007-12-18 Thread Rob Logan

  bootfs rootpool/rootfs

does grep zfs /mnt/etc/vfstab look like:

rootpool/rootfs - /   zfs -   no  -

(bet it doesn't... edit like above and reboot)

or second guess (well, third :-) is your theory that
can be checked with:

zpool import rootpool
zpool import datapool
mkdir /mnt
mount -F zfs rootpool/rootfs /mnt
tail /mnt/boot/solaris/filelist.ramdisk
echo 'look for (no leading /)  etc/zfs/zpool.cache'
cp /etc/zfs/zpool.cache /mnt/etc/zfs/zpool.cache
/usr/sbin/bootadm update-archive -R /mnt
reboot








Re: [zfs-discuss] Fwd: zfs boot suddenly not working

2007-12-18 Thread Rob Logan

   I guess the zpool.cache in the bootimage got corrupted?
not on zfs :-)   perhaps a path to a drive changed?

Rob



Re: [zfs-discuss] JBOD performance

2007-12-17 Thread Rob Logan
  r/sw/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  0.0   48.00.0 3424.6  0.0 35.00.0  728.9   0 100 c2t8d0

  That service time is just terrible!

yeah, that service time is unreasonable. almost a second for each
command? and 35 more commands queued? (reorder = faster)

I had a server with similar service times, so I repaired
a replacement blade, and when I went to slide it in, noticed a
loud noise coming from the blade below it.. notified the windows
person who owned it; it had been broken for some time,
so they turned it off... it was much better after that.

vibration... check vibration.

Rob


[zfs-discuss] install notes for zfs root.

2007-11-29 Thread Rob Logan

After a fresh SMI labeled c0t0d0s0 / swap /export/home jumpstart
in /etc check
   hostname.e1000g0 defaultrouter netmasks resolv.conf nsswitch.conf
   services hosts
   coreadm.conf acctadm.conf dumpadm.conf named.conf rsync.conf
svcadm disable fc-cache cde-login cde-calendar-manager cde-printinfo
svcadm disable sendmail rfc1179 gss ktkt_warn autofs hal ndp
svcadm disable rmvolmgr smserver power wbem webconsole
svcadm enable rstat
svccfg -s bind setprop config/local_only = false
crontab -e sys
pkgadd -d http://www.blastwave.org/pkg_get.pkg

# make zfs pools of remaining disks
umount /export/home
format ; echo copy the c0t0d0 SMI partition label to c0t1d0
zpool create -f z mirror c0t0d0s7 c0t1d0s7 mirror c0t2d0 c0t3d0 mirror c0t4d0 
c0t5d0
zfs set compression=lzjb z
zpool create -f root c0t1d0s0
mkdir -p /root/boot/grub
cp /boot/grub/menu.lst /root/boot/grub/menu.lst
vi /root/boot/grub/menu.lst  ; echo 'replace bottom with (remove ,console=ttya)'

title Solaris ZFS disk0
root (hd0,0,a)
kernel$ /platform/i86pc/kernel/$ISADIR/unix -v -B $ZFS-BOOTFS,console=ttya
module$ /platform/i86pc/$ISADIR/boot_archive

title Solaris ZFS disk1
root (hd1,0,a)
kernel$ /platform/i86pc/kernel/$ISADIR/unix -v -B $ZFS-BOOTFS,console=ttya
module$ /platform/i86pc/$ISADIR/boot_archive

title Solaris ZFS failsafe
kernel /boot/platform/i86pc/kernel/unix -v -B console=ttya
module /boot/x86.miniroot-safe

# (hdx,x,x) where the first entry in the tuple is the disk identifier,
# second entry is the partition number (0-3), and the third entry is the
# slice number (a-h), where a is slice 0 and h is slice 7.

zfs create root/snv_77 ; cd /root/snv_77
ufsdump 0fs - 99 / | ufsrestore -rf -
echo etc/zfs/zpool.cache >> boot/solaris/filelist.ramdisk
echo "/dev/dsk/c0t1d0s1 -  -  swap -   no  -" >> etc/vfstab
echo "root/snv_77   -  /  zfs  -   no  -" >> etc/vfstab
vi etc/vfstab  ; echo 'remove the / and /export/home entries'
/usr/sbin/bootadm update-archive -R /root/snv_77
installgrub boot/grub/stage1 boot/grub/stage2 /dev/rdsk/c0t1d0s0
cd / ; zfs set mountpoint=legacy root/snv_77
zpool set bootfs=root/snv_77 root
reboot

# change BIOS boot order from c0t0d0s0 to c0t1d0s0 first
# go back to ufs boot by returning BIOS boot order
# note: if c0t1d0s0 is BIOS first to boot, grub will call it (hd0,0,a)

# make proto for bfu and boot it
zpool attach -f root c0t1d0s0 c0t0d0s0
zfs snapshot root/[EMAIL PROTECTED]
zfs clone root/[EMAIL PROTECTED] root/opensol-20071126-nd
vi /root/opensol-20071126-nd/etc/vfstab ; echo change root/snv_77 to 
root/opensol-20071126-nd
zfs set mountpoint=legacy root/opensol-20071126-nd
zpool set bootfs=root/opensol-20071126-nd root
reboot

# upgrade to current bits
mkdir -p /z/tmp/bfu ; cd /z/tmp/bfu
wget http://dlc.sun.com/osol/on/downloads/20071126/SUNWonbld.i386.tar.bz2
wget 
http://dlc.sun.com/osol/on/downloads/20071126/on-bfu-nightly-osol-nd.i386.tar.bz2
bzip2 -d -c SUNWonbld.i386.tar.bz2 | tar -xvf -
cd onbld ; pkgadd -d .
bzip2 -d -c on-bfu-nightly-osol-nd.i386.tar.bz2 | tar -xvf -
/opt/onbld/bin/bfu /z/tmp/bfu/archives-nightly-osol-nd/i386
/opt/onbld/bin/acr
reboot

# spread the load across all spindles in compressed form
rm -rf /.make.machines /z/tmp/bfu /bfu*
zfs create z/opensol-20071126-nd
zfs create z/opensol-20071126-nd/usr
zfs create z/opensol-20071126-nd/opt
zfs create z/opensol-20071126-nd/var
zfs snapshot root/[EMAIL PROTECTED]
cd /.zfs/snapshot/dump
rsync -a . /z/opensol-20071126-nd
zfs set mountpoint=legacy z/opensol-20071126-nd/usr
zfs set mountpoint=legacy z/opensol-20071126-nd/opt
zfs set mountpoint=legacy z/opensol-20071126-nd/var
echo "z/opensol-20071126-nd/usr - /usr zfs - yes -" >> /etc/vfstab
echo "z/opensol-20071126-nd/var - /var zfs - yes -" >> /etc/vfstab
echo "z/opensol-20071126-nd/opt - /opt zfs - yes -" >> /etc/vfstab
reboot

# play with l2arc
zfs promote root/opensol-20071126-nd
zfs destroy -r root/snv_77
zpool upgrade -a
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0

# other notes
vi /etc/ssh/sshd_config
svcadm disable ssh ; svcadm enable ssh
mkdir .ssh ; cd .ssh ; chmod 700 .
ssh-keygen -t dsa -f id_dsa -P ''
scp id_dsa.pub [EMAIL PROTECTED] ~/.ssh/authorized_keys2 ; chmod  600 
authorized_keys2
zfs snapshot -r [EMAIL PROTECTED]
zfs send -R z/[EMAIL PROTECTED] | ssh 10.1.1.7 zfs recv -v -d z
zfs send -i z/[EMAIL PROTECTED] z/[EMAIL PROTECTED] | ssh 10.1.1.7 zfs recv -v 
-F -d z

http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
eeprom kernelbase=0x80000000 ; echo 'for 32bit cpu'
echo "set kmem_flags = 0x0" >> /etc/system ; echo 'less mem asserts'
echo "set zfs:zfs_prefetch_disable = 1" >> /etc/system ; echo 'do not make all read requests 64kB'
echo "set zfs:zil_disable = 1" >> /etc/system ; echo 'to make NFS sync async'
echo "set zfs:zfs_nocacheflush = 1" >> /etc/system ; echo 'do not bother flushing the disk cache'
echo "set sata:sata_func_enable = 0x5" >> /etc/system ; echo 'to turn off NCQ'
echo 

Re: [zfs-discuss] Home Motherboard

2007-11-22 Thread Rob Logan
here is a simple layout for 6 disks toward speed :

/dev/dsk/c0t0d0s1 -  - swap-  no  -
/dev/dsk/c0t1d0s1 -  - swap-  no  -
root/snv_77   -  / zfs -  no  -
z/snv_77/usr  -  /usr  zfs -  yes -
z/snv_77/var  -  /var  zfs -  yes -
z/snv_77/opt  -  /opt  zfs -  yes -


root@test[2:25pm]/root/boot/grub 27 % zpool iostat -v
                 capacity     operations    bandwidth
pool           used  avail   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
root          4.39G  15.1G    179      1  3.02M  16.0K
  mirror      4.39G  15.1G    179      1  3.02M  16.0K
    c0t1d0s0      -      -     62      1  3.35M  22.3K
    c0t0d0s0      -      -     61      1  3.35M  22.3K
------------  -----  -----  -----  -----  -----  -----
z              319G   421G  1.15K     17  81.9M  83.8K
  mirror       113G   163G    418      8  28.8M  40.8K
    c0t0d0s7      -      -    272      2  29.4M  48.2K
    c0t1d0s7      -      -    272      2  29.5M  48.2K
  mirror       103G   129G    376      4  26.5M  21.4K
    c0t2d0        -      -    250      3  27.1M  28.9K
    c0t3d0        -      -    250      2  27.1M  28.9K
  mirror       104G   128G    380      4  26.6M  21.6K
    c0t4d0        -      -    253      2  27.1M  29.0K
    c0t5d0        -      -    252      2  27.1M  29.0K
------------  -----  -----  -----  -----  -----  -----
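
fwiw, the two pools above were built along these lines (a sketch
reconstructed from the iostat output, not the exact commands):

zpool create root mirror c0t1d0s0 c0t0d0s0
zpool create z mirror c0t0d0s7 c0t1d0s7 \
               mirror c0t2d0 c0t3d0 \
               mirror c0t4d0 c0t5d0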

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Home Motherboard

2007-11-22 Thread Rob Logan

  with 4 cores and 2-4G of ram.

not sure 2G is enough... at least with 64bit there are no kernel space  
issues.

6 % echo '::memstat' | mdb -k
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                     692075              2703   66%
Anon                        33265               129    3%
Exec and libs                8690                33    1%
Page cache                   1143                 4    0%
Free (cachelist)             3454                13    0%
Free (freelist)            307400              1200   29%

Total                     1046027              4086
Physical                  1046026              4086

this tree on the 64bit v20z box:

Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                     668799              2612   85%
Anon                        38477               150    5%
Exec and libs                4881                19    1%
Page cache                   5363                20    1%
Free (cachelist)             7566                29    1%
Free (freelist)             59052               230    8%

Total                      784138              3063
Physical                   784137              3063

and the same tree on a 32bit box:

Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                     261359              1020   33%
Anon                        52314               204    7%
Exec and libs               12245                47    2%
Page cache                   9885                38    1%
Free (cachelist)             6816                26    1%
Free (freelist)            441647              1725   56%

Total                      784266              3063
Physical                   784265              3063


from http://www.opensolaris.org/jive/message.jspa?messageID=173580

8 % ./zfs-mem-used
checking pool map size [B]: root
358424
checking pool map size [B]: z
4162512

9 % cat zfs-mem-used
#!/bin/sh

echo '::spa' | mdb -k | grep ACTIVE \
  | while read pool_ptr state pool_name
do
  echo checking pool map size [B]: $pool_name

  echo "${pool_ptr}::walk metaslab | ::print -d struct metaslab ms_smo.smo_objsize" \
| mdb -k \
| nawk '{sub("^0t","",$3); sum+=$3} END {print sum}'
done

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Home Motherboard

2007-11-21 Thread Rob Logan
grew tired of the recycled 32bit cpus in
http://www.opensolaris.org/jive/thread.jspa?messageID=127555

and bought this to put the two marvell88sx cards in:
$255 http://www.supermicro.com/products/motherboard/Xeon3000/3210/X7SBE.cfm
  http://www.supermicro.com/manuals/motherboard/3210/MNL-0970.pdf
$195 1333FSB 2.6GHz Xeon 3075 (basically an E6750)
  Any Core 2 Quad/Duo in LGA775 will work, including 45nm dies:
  http://rob.com/sun/x7sbe/45nm-pricing.jpg
$270 Four 1G PC2-6400 DDRII 800MHz 240-pin ECC Unbuffered SDRAM
$ 55 LOM (IPMI and Serial over LAN)
  http://www.supermicro.com/manuals/other/AOC-SIMSOLC-HTC.pdf

# /usr/X11/bin/scanpci
pci bus 0x cardnum 0x00 function 0x00: vendor 0x8086 device 0x29f0
  Intel Corporation Server DRAM Controller

pci bus 0x cardnum 0x01 function 0x00: vendor 0x8086 device 0x29f1
  Intel Corporation Server Host-Primary PCI Express Bridge

pci bus 0x cardnum 0x1a function 0x00: vendor 0x8086 device 0x2937
  Intel Corporation USB UHCI Controller #4

pci bus 0x cardnum 0x1a function 0x01: vendor 0x8086 device 0x2938
  Intel Corporation USB UHCI Controller #5

pci bus 0x cardnum 0x1a function 0x02: vendor 0x8086 device 0x2939
  Intel Corporation USB UHCI Controller #6

pci bus 0x cardnum 0x1a function 0x07: vendor 0x8086 device 0x293c
  Intel Corporation USB2 EHCI Controller #2

pci bus 0x cardnum 0x1c function 0x00: vendor 0x8086 device 0x2940
  Intel Corporation PCI Express Port 1

pci bus 0x cardnum 0x1c function 0x04: vendor 0x8086 device 0x2948
  Intel Corporation PCI Express Port 5

pci bus 0x cardnum 0x1c function 0x05: vendor 0x8086 device 0x294a
  Intel Corporation PCI Express Port 6

pci bus 0x cardnum 0x1d function 0x00: vendor 0x8086 device 0x2934
  Intel Corporation USB UHCI Controller #1

pci bus 0x cardnum 0x1d function 0x01: vendor 0x8086 device 0x2935
  Intel Corporation USB UHCI Controller #2

pci bus 0x cardnum 0x1d function 0x02: vendor 0x8086 device 0x2936
  Intel Corporation USB UHCI Controller #3

pci bus 0x cardnum 0x1d function 0x07: vendor 0x8086 device 0x293a
  Intel Corporation USB2 EHCI Controller #1

pci bus 0x cardnum 0x1e function 0x00: vendor 0x8086 device 0x244e
  Intel Corporation 82801 PCI Bridge

pci bus 0x cardnum 0x1f function 0x00: vendor 0x8086 device 0x2916
  Intel Corporation  Device unknown

pci bus 0x cardnum 0x1f function 0x02: vendor 0x8086 device 0x2922
  Intel Corporation 6 port SATA AHCI Controller

pci bus 0x cardnum 0x1f function 0x03: vendor 0x8086 device 0x2930
  Intel Corporation SMBus Controller

pci bus 0x cardnum 0x1f function 0x06: vendor 0x8086 device 0x2932
  Intel Corporation Thermal Subsystem

pci bus 0x0001 cardnum 0x00 function 0x00: vendor 0x8086 device 0x0329
  Intel Corporation 6700PXH PCI Express-to-PCI Bridge A

pci bus 0x0001 cardnum 0x00 function 0x01: vendor 0x8086 device 0x0326
  Intel Corporation 6700/6702PXH I/OxAPIC Interrupt Controller A

pci bus 0x0001 cardnum 0x00 function 0x02: vendor 0x8086 device 0x032a
  Intel Corporation 6700PXH PCI Express-to-PCI Bridge B

pci bus 0x0001 cardnum 0x00 function 0x03: vendor 0x8086 device 0x0327
  Intel Corporation 6700PXH I/OxAPIC Interrupt Controller B

pci bus 0x0003 cardnum 0x02 function 0x00: vendor 0x11ab device 0x6081
  Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X  
Controller

pci bus 0x000d cardnum 0x00 function 0x00: vendor 0x8086 device 0x108c
  Intel Corporation 82573E Gigabit Ethernet Controller (Copper)

pci bus 0x000f cardnum 0x00 function 0x00: vendor 0x8086 device 0x109a
  Intel Corporation 82573L Gigabit Ethernet Controller

pci bus 0x0011 cardnum 0x04 function 0x00: vendor 0x1002 device 0x515e
  ATI Technologies Inc ES1000

# cfgadm -a
Ap_Id  Type Receptacle   Occupant
Condition
pcie5  etherne/hp   connectedconfigured   ok
pcie6  etherne/hp   connectedconfigured   ok
sata0/0::dsk/c0t0d0disk connectedconfigured   ok
sata0/1::dsk/c0t1d0disk connectedconfigured   ok
sata0/2::dsk/c0t2d0disk connectedconfigured   ok
sata0/3::dsk/c0t3d0disk connectedconfigured   ok
sata0/4::dsk/c0t4d0cd/dvd   connectedconfigured   ok
sata1/0sata-portemptyunconfigured ok
sata1/1sata-portemptyunconfigured ok
sata1/2sata-portemptyunconfigured ok
sata1/3sata-portemptyunconfigured ok
sata1/4sata-portemptyunconfigured ok
sata1/5sata-portemptyunconfigured ok
sata1/6sata-portemptyunconfigured ok
sata1/7::dsk/c1t7d0disk connectedconfigured   ok
usb0/1 unknown  empty

Re: [zfs-discuss] which would be faster

2007-11-20 Thread Rob Logan

  On the other hand, the pool of 3 disks is obviously
  going to be much slower than the pool of 5

while today that's true, someday I/O will be
balanced by the latency of each vdev rather than
just their number... plus two vdevs are always going
to be faster than one vdev, even if one is slower
than the other.

so do 4+1 and 2+1 in the same pool rather than
separate pools. this will let zfs balance
the load (always) between the two vdevs rather than
you trying to balance the load between pools yourself.
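
a sketch with hypothetical device names (not from the thread): one 4+1
and one 2+1 raidz vdev in a single pool, then watch the spread:

zpool create tank \
  raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 \
  raidz c1t0d0 c1t1d0 c1t2d0
zpool iostat -v tank 5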

Rob

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backport of vfs_zfsacl.c to samba 3.0.26a, [and NexentaStor]

2007-11-02 Thread Rob Logan

I'm confused by this and NexentaStor... wouldn't it be better
to use b77? with:

Heads Up: File system framework changes (supplement to CIFS' head's up)
Heads Up: Flag Day (Addendum) (CIFS Service)
Heads Up: Flag Day (CIFS Service)
caller_context_t in all VOPs - PSARC/2007/218
VFS Feature Registration and ACL on Create - PSARC/2007/227
ZFS Case-insensitive support - PSARC/2007/244
Extensible Attribute Interfaces - PSARC/2007/315
ls(1) new command line options '-/' and '-%': CIFS system attributes support - 
PSARC/2007/394
Modified Access Checks for CIFS - PSARC/2007/403
Add system attribute support to chmod(1) - PSARC/2007/410
CIFS system attributes support for cp(1), pack(1), unpack(1), compress(1) and 
uncompress(1) - PSARC/2007/432
Rescind SETTABLE Attribute - PSARC/2007/444
CIFS system attributes support for cpio(1), pax(1), tar(1) - PSARC/2007/459
Update utilities to match CIFS system attributes changes. - PSARC/2007/546
ZFS sharesmb property - PSARC/2007/560
VFS Feature Registration and ACL on Create - PSARC/2007/227
Extensible Attribute Interfaces - PSARC/2007/315
Extensible Attribute Interfaces - PSARC/2007/315
Extensible Attribute Interfaces - PSARC/2007/315
Extensible Attribute Interfaces - PSARC/2007/315
CIFS Service - PSARC/2006/715


http://www.opensolaris.org/os/community/on/flag-days/all/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs: allocating allocated segment (offset=

2007-10-12 Thread Rob Logan

  I suspect that the bad ram module might have been the root
  cause for that freeing free segment zfs panic,

perhaps. I removed the two 2G simms but left the two 512M
simms, and also removed the kernelbase setting, but the zpool import
still crashed the machine.

it's also registered ECC ram; memtest86 v1.7 hasn't
found anything yet, but I'll let it run overnight.

Rob
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Mountroot and Bootroot Comparison

2007-10-05 Thread Rob Logan
  I'm not surprised that having /usr in a separate pool failed.

while this is discouraging (I have several b62 machines with
root mirrored and /usr on raidz), if booting from raidz
is a priority, and comes soon, at least I'd be happy :-)

Rob
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 8+2 or 8+1+spare?

2007-07-09 Thread Rob Logan

  which is better 8+2 or 8+1+spare?

8+2 is safer for the same speed
8+2 requires a little more math, so it's slower in theory (unlikely to be seen)
(4+1)*2 is 2x faster, and in theory is less likely to have wasted space
 in a transaction group (unlikely to be seen)
(4+1)*2 is cheaper to upgrade in place because of its fewer elements

so, Mr (no scale on the time axis) Elling: what's the MTTDL
between these three?
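
not Mr Elling's math, just the textbook first-order MTTDL approximations
(they ignore unrecoverable read errors and spare swap-in time); the MTTF
and MTTR below are assumed numbers, in hours:

MTTF=500000 ; MTTR=24
nawk -v f=$MTTF -v r=$MTTR 'BEGIN {
  n=10; printf "8+2 raidz2:         %.2e hrs\n", f*f*f/(n*(n-1)*(n-2)*r*r)
  n=9;  printf "8+1 raidz (+spare): %.2e hrs\n", f*f/(n*(n-1)*r)
  n=5;  printf "one 4+1 raidz set:  %.2e hrs\n", f*f/(n*(n-1)*r)
}'
# the hot spare mostly just shrinks MTTR; for (4+1)*2 the pool MTTDL is
# half the single-set number, since losing either set loses the pool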
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on 32-bit...

2007-06-30 Thread Rob Logan

 How does eeprom(1M) work on the Xeon that the OP said he has?

it's faked via /boot/solaris/bootenv.rc,
which is built into /platform/i86pc/$ISADIR/boot_archive
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on 32-bit...

2007-06-29 Thread Rob Logan

 issues does ZFS have with running in only 32-bit mode?

with less than 2G ram, no worry... with more than 3G ram,
and if you don't need the memory in userspace, give it to the kernel
as virtual address space for the zfs cache by moving the kernelbase...
eeprom kernelbase=0x8000
or for only 1G userland:
eeprom kernelbase=0x5000

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4985055
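
a quick sanity check before and after moving it (a sketch; the Kernel
line from ::memstat is the thing to watch):

eeprom kernelbase                         # prints the current setting, if any
echo '::memstat' | mdb -k | grep Kernel   # how much the kernel actually holds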


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Suggestions on 30 drive configuration?

2007-06-26 Thread Rob Logan

 an array of 30 drives in a RaidZ2 configuration with two hot spares
 I don't want to mirror 15 drives to 15 drives

ok, so space over speed... and are willing to toss somewhere between 4
and 15 drives for protection.

raidz splits the (up to 128k) write/read recordsize into each element of
the raidz set.. (i.e. all drives must be touched and all must finish
before the block request is complete)  so with a 9 disk raidz1 set that's
(8 data + 1 parity (8+1)) or 16k per disk for a full 128k write. or for
a smaller 4k block, that's a single 512b sector per disk. on a 26+2 raidz2
set that 4k block would still use 8 data disks, with the other 18 disks
unneeded but allocated.

so perhaps three sets of 8+2 would let three blocks be read/written to
at once with a total of 6 disks for protection.

but for twice the speed, six sets of 4+1 would be the same size (same
number of disks for protection) but aren't quite as safe in exchange for the 2x speed.
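
a sketch of both layouts with hypothetical device names (three
controllers, ten disks each):

# three 8+2 raidz2 sets: six parity disks, three blocks in flight at once
zpool create tank \
  raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 \
  raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0 \
  raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 c3t8d0 c3t9d0

# six 4+1 raidz sets: same usable space, same six disks lost to parity,
# roughly twice the IOPs, but only one disk of protection per set
zpool create tank \
  raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
  raidz c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 \
  raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
  raidz c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0 \
  raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 \
  raidz c3t5d0 c3t6d0 c3t7d0 c3t8d0 c3t9d0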

Rob

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: marvell88sx error in command 0x2f: status 0x51

2007-06-21 Thread Rob Logan

 [hourly] marvell88sx error in command 0x2f: status 0x51

ah, it's some kinda SMART or FMA query that

model WDC WD3200JD-00KLB0
firmware 08.05J08
serial number  WD-WCAMR2427571
supported features:
 48-bit LBA, DMA, SMART, SMART self-test
SATA1 compatible
capacity = 625142448 sectors

drives do not support but

model ST3750640AS
firmware 3.AAK
serial number 5QD02ES6
supported features:
 48-bit LBA, DMA, Native Command Queueing, SMART, SMART self-test
SATA1 compatible
queue depth 32
capacity = 1465149168 sectors

do...

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] marvell88sx error in command 0x2f: status 0x51

2007-06-19 Thread Rob Logan

with no seen effects `dmesg` reports lots of
kern.warning] WARNING: marvell88sx1: port 3: error in command 0x2f: status 0x51
found in snv_62 and opensol-b66 perhaps
http://bugs.opensolaris.org/view_bug.do?bug_id=6539787

can someone post part of the headers even if the code is closed?

Rob
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS Apple WWDC Keynote Absence

2007-06-12 Thread Rob Logan


we know time machine requires an extra disk (local or remote), so it's
reasonable to guess the non-bootable time machine disk could use zfs.

someone with a Leopard dvd (Rick Mann) could answer this...
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Holding disks for home servers

2007-06-07 Thread Rob Logan


On the third upgrade of the home nas, I chose
http://www.addonics.com/products/raid_system/ae4rcs35nsa.asp to hold the
disks. each holds 5 disks in the space of three slots, and 4 fit into a
http://www.google.com/search?q=stacker+810 case for a total of 20
disks.

But if given a chance to go back in time, the
http://www.supermicro.com/products/accessories/mobilerack/CSE-M35TQ.cfm
has LEDs next to the drive, and doesn't vibrate as much.

photos in http://rob.com/sun/zfs/

Rob
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Deterioration with zfs performance and recent zfs bits?

2007-06-01 Thread Rob Logan


 Patching zfs_prefetch_disable = 1 has helped

It's my belief this mainly aids scanning metadata. my
testing with rsync and yours with find (and seen with
du ; zpool iostat -v 1 ) bears this out..
mainly tracked in bug 6437054 vdev_cache: wise up or die
http://www.opensolaris.org/jive/thread.jspa?messageID=42212

so for linking your code it might help, but if one ran
a clean build down the tree, it would hurt compile times.
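
fwiw it can also be flipped on a live system instead of editing
/etc/system and rebooting (assuming the tunable is still named
zfs_prefetch_disable in your bits):

echo zfs_prefetch_disable/W0t1 | mdb -kw    # disable prefetch now
echo zfs_prefetch_disable/W0t0 | mdb -kw    # turn it back on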

Rob
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: zfs boot image conversion kit is posted

2007-05-01 Thread Rob Logan

 sits there for a second, then boot loops and comes back to the grub menu.

I noticed this too when I was playing... using
kernel$ /platform/i86pc/kernel/$ISADIR/unix -v -B $ZFS-BOOTFS
I could see vmunix loading, but it quickly NMIed around the
rootnex: [ID 349649 kern.notice] isa0 at root
point... changing bootfs root/snv_62 to bootfs rootpool/snv_62
and rebuilding the pool EXACTLY the same way fixed it.

try changing dataset mypool to dataset rootpool...
and I bet it will work..
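
a sketch of the bits that matter, assuming the pool/dataset end up named
rootpool and rootpool/snv_62 (same names as in the rootpool notes
elsewhere in this archive):

zpool set bootfs=rootpool/snv_62 rootpool

and in menu.lst:

title Solaris ZFS
bootfs rootpool/snv_62
kernel$ /platform/i86pc/kernel/$ISADIR/unix -v -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive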

Rob
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] rootpool notes

2007-04-24 Thread Rob Logan


updating my notes with Lori's rootpool notes found in
http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ using
the Solaris Express: Community Release DVD (no asserts like bfu code) from
http://www.opensolaris.org/os/downloads/on/ and installing the Solaris
Express (second option, as pkgs are customizable) making three partitions
(/, swap, /usr (no /export)) on a single disk, then replaced the dvd drive
with a second disk:  (I'll learn jumpstart someday)

format ; echo copy the partition table from the first disk to the second
zpool create -f z c0d1s6 ; echo /usr partition slice on the empty disk
zfs create z/usr
zfs create z/opt
zfs create z/var
zfs set compression=lzjb z/usr
zfs set compression=lzjb z/opt
zfs set compression=lzjb z/var
cd /z ; ufsdump 0fs - 99 /| ufsrestore -xf - var opt
cd /z/usr ; ufsdump 0fs - 99 /usr | ufsrestore -rf -
zfs set mountpoint=legacy z/usr
zfs set mountpoint=legacy z/opt
zfs set mountpoint=legacy z/var
cp /etc/vfstab /etc/vfstab.bak ; grep -v usr /etc/vfstab.bak > /etc/vfstab
echo /dev/dsk/c0d1s1   -   -   swap   -   no  - >> /etc/vfstab
echo z/usr -   /usr   zfs -   yes - >> /etc/vfstab
echo z/var -   /var   zfs -   yes - >> /etc/vfstab
echo z/opt -   /opt   zfs -   yes - >> /etc/vfstab
cd / ; mkdir opt.old var.old
mv /opt/* /opt.old ; mv /var/* /var.old
reboot
rm -rf /opt.old /var.old
zpool add -f z c0d0s6 ; echo the old /usr partition
zpool create -f rootpool c0d1s0 ; echo / from the empty disk
zfs create rootpool/snv_62
zpool set bootfs=rootpool/snv_62 rootpool
zfs set mountpoint=legacy rootpool/snv_62
mkdir /a ; mount -F zfs rootpool/snv_62 /a
cd /a ; ufsdump 0fs - 99 / | ufsrestore -rf -
grep -v ufs /etc/vfstab > /a/etc/vfstab
echo rootpool/snv_62 -  /  zfs -  no  - >> /a/etc/vfstab
echo rootpool/snv_62 -  /a zfs -  yes - >> /etc/vfstab
echo etc/zfs/zpool.cache >> /a/boot/solaris/filelist.ramdisk
/usr/sbin/bootadm update-archive -R /a
installgrub /a/boot/grub/stage1 /a/boot/grub/stage2 /dev/rdsk/c0d1s0
mkdir -p /rootpool/boot/grub
sed s/default\ 0/default\ 2/ /boot/grub/menu.lst > /rootpool/boot/grub/menu.lst
cat << EOF >> /rootpool/boot/grub/menu.lst

title Solaris ZFS
#root (hd1,0,a)
#bootfs root/snv_62
kernel$ /platform/i86pc/kernel/$ISADIR/unix -v -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive

title Solaris ZFS failsafe
kernel /boot/platform/i86pc/kernel/unix -v -B
module /boot/x86.miniroot-safe

# (hdx,x,x) where the first entry in the tuple is the disk identifier,
# second entry is the partition number (0-3), and the third entry is the
# slice number (a-h), where a is slice 0 and h is slice 7.

EOF

reboot
zpool attach -f rootpool c0d1s0 c0d0s0

  failsafe boot recovery:
mount -o remount,rw /
zpool import -f rootpool
zpool import -f z
mount -F zfs rootpool/snv_62 /a
bootadm update-archive -R /a
reboot

  other notes:
tried and failed to rename rootpool to root
svccfg -s bind setprop config/local_only = false
svcadm enable rstat

thanks SUNW guys/gals, you rock!
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] update on zfs boot support

2007-03-17 Thread Rob Logan


I'm sure it's not blessed, but another process to maximize the zfs space
on a system with few disks is

1) boot from SXCR http://www.opensolaris.org/os/downloads/on/
2) select min install with

512M /
512M swap
rest /export/home

use format to copy the partition table from disk0 to disk1
umount /export/home
zpool create -f zfs c1t0d0s7 c1t1d0s7
zfs create zfs/usr
zfs create zfs/var
zfs create zfs/opt
cd /zfs
ufsdump 0fs - 99 /usr /var | ufsrestore -rf -
mkdir var/run
zfs set mountpoint=legacy zfs/usr
zfs set mountpoint=legacy zfs/var
zfs set mountpoint=legacy zfs/opt
vi /etc/vfstab ; echo adding these lines:
  /dev/dsk/c1t0d0s0 /dev/rdsk/c1t0d0s0/   ufs 1   no  -
  /dev/dsk/c1t0d0s1   -   -   swap-   no  -
  /dev/dsk/c1t1d0s1   -   -   swap-   no  -
  zfs/usr -   /usrzfs -   yes -
  zfs/var -   /varzfs -   yes -
  zfs/opt -   /optzfs -   yes -
cd /
bootadm update-archive
mkdir nukeme
mv var/* nukeme
mv usr/* nukeme
power cycle as there is no reboot :-)
rm -rf /nukeme

note there isn't enough space for a crashdump but there
is space for a backup of root on c1t1d0s0

if you want bfu from here to get the slower debug bits
but an easy way to get /usr/ucb

pkgadd SUNWadmc SUNWtoo SUNWpool SUNWzoner SUNWzoneu
pkgadd SUNWbind SUNWbindr SUNWluu SUNWadmfw SUNWlur SUNWluzone
echo set kmem_flags = 0x0 >> /etc/system
touch /usr/lib/dbus-daemon
chmod 755 /usr/lib/dbus-daemon
grab build-tools and on-bfu from
  http://dlc.sun.com/osol/on/downloads/current/
vi /opt/onbld/bin/bfu
  to remove the fastfs depend and path it out as /opt/onbld/bin/`uname 
-p`/fastfs
  and change the remote acr to /opt/onbld/bin/acr
vi /opt/onbld/bin/acr
  path out /usr/bin/gzip

I've been fighting an issue where after an hour I can ping the default
router but packets never get forwarded to the default route.. it fails
with either e1000g0 or bge0, and an ifconfig down ; ifconfig up
fixes it for another hour or so.
http://bugs.opensolaris.org/view_bug.do?bug_id=6523767
in opensol-20070312 didn't fix it either. sigh..

Rob

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: How much do we really want zpool remove?

2007-02-27 Thread Rob Logan

 With modern journalling filesystems, I've never had to fsck anything or
 run a filesystem repair. Ever.  On any of my SAN stuff.

you will.. even if the SAN is perfect, you will hit
bugs in the filesystem code.. from lots of rsync hard
links or like this one from raidtools last week:

Feb  9 05:38:39 orbit kernel: mptbase: ioc2: IOCStatus(0x0043): SCSI Device Not 
There
Feb  9 05:38:39 orbit kernel: md: write_disk_sb failed for device sdp1
Feb  9 05:38:39 orbit kernel: md: errors occurred during superblock update, 
repeating

Feb  9 05:39:01 orbit kernel: raid6: Disk failure on sdp1, disabling device. 
Operation continuing on 13 devices
Feb  9 05:39:09 orbit kernel: mptscsi: ioc2: attempting task abort! 
(sc=cb17c800)
Feb  9 05:39:10 orbit kernel: RAID6 conf printout:
Feb  9 05:39:10 orbit kernel:  --- rd:14 wd:13 fd:1

Feb  9 05:44:37 orbit kernel: EXT3-fs error (device dm-0): ext3_readdir: bad 
entry in directory #10484: rec_len %$
Feb  9 05:44:37 orbit kernel: Aborting journal on device dm-0.
Feb  9 05:44:37 orbit kernel: ext3_abort called.
Feb  9 05:44:37 orbit kernel: EXT3-fs error (device dm-0): 
ext3_journal_start_sb: Detected aborted journal
Feb  9 05:44:37 orbit kernel: Remounting filesystem read-only
Feb  9 05:44:37 orbit kernel: attempt to access beyond end of device
Feb  9 05:44:44 orbit kernel: oom-killer: gfp_mask=0xd0
death and a corrupt fs


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] page rates

2007-02-26 Thread Rob Logan


This is a lightly loaded v20z but it has zfs across its two disks..
it's hung (requiring a power cycle) twice since running
5.11 opensol-20060904

the last time I had a `vmstat 1` running... nice page rates
right before death :-)


kthr  memorypagedisk  faults  cpu
 r b w   swap  free  re  mf pi po fr de sr f0 m0 s0 s1   in   sy   cs us sy id
 0 0 0 1788296 50360  0   3  4  0  0  0  0  0  1  0 63 1962  273 5435  0  2 97
 0 0 0 1788296 50428  0   3  0  0  0  0  0  0  0  0  0 1075  323  446  1  1 98
 0 0 0 1788296 50504  0   3  4  0  0  0  0  0  1  0  0  929  435  385  0  1 99
 0 0 0 1788296 50432  0   2  0  0  0  0  0  0  0  0  0  921  472  517  1  0 99
 0 0 0 1788296 50436  0   1  0  0  0  0  0  0  0  0  0  900  426  361  0  1 98
 20 0 0 1788296 5 0   3  0  0  0  0  0  0  0  0  9  864  230  236  0 49 51
 0 0 0 1788296 50272  0   5  8  0  0  0  0  0  0  0 45 2460  373 4372  1 20 79
 0 0 0 1788296 50252  2   5  0  0  0  0  0  0  0  0  0 1066  467  469  1  1 98
 0 0 0 1788040 50304  0  29  0  0  0  0  0  0  0  0  0  931  355  353  0  1 99
 0 0 0 1788040 50120  0   0  0  0  0  0  0  0  0  0  0  693  348  426  0  1 99
 0 0 0 1787784 49972  0  83  4  0  0  0  0  0  1  0  0 1048  376  476  0  1 98
 0 0 0 1787784 49836  0  17  0  0  0  0  0  0  0  0 39 1676  377 4254  0  2 98
 0 0 0 1787528 49760  2  18  0  0  0  0  0  0  0  0  2  790  313  462  0  0 99
 0 0 0 1787528 49748  1  26  0  0  0  0  0  0  0  0  0  833  432  513  1  1 98
 0 0 0 1787528 49444  0  66  0  0  0  0  0  0  0  0  0 1176  467  563  0  2 98
 0 0 0 1787016 49148  0  87  0  0  0  0  0  0  0  0  0 1239  422  460  0  1 99
 0 0 0 1786760 49128  0  27  0 252 256 0 1437 0 5 0 89 2005  386 5124  1  5 94
 0 0 0 1786760 49148  0  87  0 214 214 0 1900 0 4 0  0 1136  490  505  1  1 98
 0 0 0 1786504 49148  0  32  0 166 166 0 952 0 3  0  0  789  336  397  0  1 99
 kthr  memorypagedisk  faults  cpu
 r b w   swap  free  re  mf pi po fr de sr f0 m0 s0 s1   in   sy   cs us sy id
 0 0 0 1786504 49148  0   1  0  0  0  0  0  0  0  0  0  675  263  405 28  1 71
 0 0 0 1786504 49148  0   0  0  0  0  0  0  0  0  0 40  845  221 2499 50  1 49
 0 0 0 1786504 49148  1  50  0 333 337 0 2762 0 6 0 51 1584  479 4976 43  4 53
 0 0 0 1739020 49092  0   4  0 1935 1978 0 13663 0 36 0 98 9032 1183 28470 1 11 
87
 0 0 0 1738764 49796  0  92  0 8264 8724 0 80894 0 152 0 188 13118 1118 36769 0 
14 86
 0 0 0 1738764 49368  0   0  0 4198 4364 0 34772 0 77 0 135 10556 1592 31932 0 
12 87
 1 0 0 1738764 47252 19  28 32 5395 5675 0 56668 0 105 0 150 11534 1135 34519 1 
17 82
 0 0 0 1738764 49032  3  17  4 6933 7399 0 173424 0 129 0 339 16913 1046 40602 
0 24 75
 0 0 0 1738764 52536 17  23 43 4717 4812 0 32283 0 93 0 265 10603 3460 31909 1 
20 79
 0 0 0 1738764 48520 70  81 28 4820 5110 0 64402 0 96 0 57 4233 12106 16099 2 
28 69
 0 0 0 1738764 48556 257 290 134 3838 3965 0 31038 0 104 0 55 5518 8696 19797 6 
22 72
 0 0 0 1738764 48868 118 137 36 5349 5500 0 43605 0 109 0 123 12934 786 32379 7 
14 79
 0 0 0 1738764 50016 196 258 115 3909 4063 0 39666 0 104 0 292 18954 873 49225 
1 20 79
 0 0 0 1738764 49380 27  38 20 8906 9125 0 77911 0 172 0 144 17529 856 41523 1 
16 83
 1 0 0 1738764 48120  3  56 16 5288 5296 0 80894 0 103 0 131 13210 2877 39454 0 
19 80
 0 0 0 1738508 48992 41  71 24 7543 7606 0 80807 0 147 0 161 18161 1453 55461 0 
20 79
 1 0 0 1738508 47964 24  55  8 7873 8079 0 72299 0 151 0 199 11502 1815 51196 1 
20 79
 0 0 0 1738508 49316 128 208 206 6420 6780 0 92682 0 178 0 170 12950 1692 46450 
1 19 80
 0 0 0 1738252 48916 84 204 259 7582 7976 0 125476 0 201 0 106 8189 3275 51597 
1 24 75
 kthr  memorypagedisk  faults  cpu
 r b w   swap  free  re  mf pi po fr de sr f0 m0 s0 s1   in   sy   cs us sy id
 0 0 0 1737996 51224 54 222 670 3283 3283 0 0 0 229 0 39 3070 6590 11314 1 15 84
 0 0 0 1737996 49616 128 347 855 3160 3410 0 44589 0 270 0 50 4663 5857 10050 2 
18 80
 0 0 0 1737996 52088 64 155 318 0  0  0  0  0 78  0 163 7183 3732 42740 3 19 78
 0 0 0 1737996 50912 186 289 390 3038 3633 0 79700 0 155 0 159 15334 2993 35202 
2 17 81
 0 0 0 1737996 49272 276 459 724 1120 1508 0 85377 0 195 0 36 3901 11036 10635 
2 28 70
 0 0 0 1737996 48100 49 178 506 6678 6933 0 288123 0 253 0 16 2761 12678 7663 3 
37 59
 0 0 0 1737996 46004 240 446 808 11869 14016 0 560661 0 420 0 7 2597 8195 3651 
2 26 72
 0 0 0 1737996 47200 338 527 732 3268 3419 0 243514 0 255 0 106 7373 1339 17765 
0 13 87
 0 0 0 1737996 44468 172 344 654 2187 2542 0 308709 0 207 0 257 15433 750 43955 
0 24 75
 0 0 0 1737996 46464 108 329 866 2419 3188 0 383872 0 276 0 106 11104 2767 
29523 1 18 81
 0 0 0 1737996 43472 468 859 1574 2305 3138 0 603123 0 432 0 25 4609 1526 11965 
1 16 83
 0 0 0 1737996 39192 804 1215 1574 4183 5446 0 1069089 0 481 0 26 3628 1930 
8346 1 22 77
 0 0 0 1737996 41196 589 1007 1647 4338 6138 0 882406 0 456 0 115 4310 1449 
11585 1 17 82
 0 0 0 1737996 41508 458 951 1900 2299 

Re: [zfs-discuss] ZFS and HDLM 5.8 ... does that coexist well ? [MD21]

2007-01-23 Thread Rob Logan

 FWIW, the Micropolis 1355 is a 141 MByte (!) ESDI disk.
 The MD21 is an ESDI to SCSI converter.

yup... its the board in the middle left of
http://rob.com/sun/sun2/md21.jpg

Rob
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool import core

2006-11-22 Thread Rob Logan


did a `zpool export zfs ; zpool import zfs` and got a core.

core file = core.import -- program ``/sbin/zpool'' on platform i86pc
SIGSEGV: Segmentation Fault
$c
libzfs.so.1`zfs_prop_get+0x24(0, d, 80433f0, 400, 0, 0)
libzfs.so.1`dataset_compare+0x39(80d5fd0, 80d5fe0)
libc.so.1`qsort+0x39d(80d5fd0, 8, 4, bff5c2eb)
libzfs.so.1`zpool_mount_datasets+0xb6(8106dc8, 0, 0)
do_import+0x15e(80bcfa8, 0, 0, 0, 0)
zpool_do_import+0x542(2, 8047ec8)
main+0xc7(3, 8047ec0, 8047ed0)
_start+0x7a(3, 8047f60, 8047f66, 8047f6d, 0, 8047f71)

but it looks like it worked fine??

zfs
version=3
name='zfs'
state=0
txg=3087327
pool_guid=6880133271152381123
vdev_tree
type='root'
id=0
guid=6880133271152381123
children[0]
type='raidz'
id=0
guid=4634527950873853841
nparity=1
metaslab_array=13
metaslab_shift=34
ashift=9
asize=2560524025856
children[0]
type='disk'
id=0
guid=3041226112753412919
path='/dev/dsk/c2d0p0'
devid='id1,[EMAIL PROTECTED]/q'
whole_disk=0
DTL=159
children[1]
type='disk'
id=1
guid=276488990546593385
path='/dev/dsk/c4d0p0'
devid='id1,[EMAIL PROTECTED]/q'
whole_disk=0
DTL=158
children[2]
type='disk'
id=2
guid=15159367539641518981
path='/dev/dsk/c6d0p0'
devid='id1,[EMAIL PROTECTED]/q'
whole_disk=0
DTL=157
children[3]
type='disk'
id=3
guid=12468347267659265830
path='/dev/dsk/c8d0p0'
devid='id1,[EMAIL PROTECTED]/q'
whole_disk=0
DTL=156
children[4]
type='disk'
id=4
guid=324811749614235294
path='/dev/dsk/c3d0p0'
devid='id1,[EMAIL PROTECTED]/q'
whole_disk=0
DTL=155
children[5]
type='disk'
id=5
guid=6553742652717577755
path='/dev/dsk/c5d0p0'
devid='id1,[EMAIL PROTECTED]/q'
whole_disk=0
DTL=154
children[6]
type='disk'
id=6
guid=12261308694925453580
path='/dev/dsk/c7d0p0'
devid='id1,[EMAIL PROTECTED]/q'
whole_disk=0
DTL=153
children[7]
type='disk'
id=7
guid=18331492026913838936
path='/dev/dsk/c9d0p0'
devid='id1,[EMAIL PROTECTED]/q'
whole_disk=0
DTL=152

this is with amd64 opensol-b47... yea, I know I need to get over
the Tamarack Flag Day.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unaccounted for daily growth in ZFS disk space usage

2006-08-26 Thread Rob Logan


 For various reasons, I can't post the zfs list type

here is one, and it seems in line with expected netapp(tm)
type usage, considering the cluster size differences.

14 % cat snap_sched
#!/bin/sh

snaps=15

for fs in `echo Videos Movies Music users local`
do
  i=$snaps
  zfs destroy zfs/[EMAIL PROTECTED]
  while [ $i -gt 1 ] ; do
i=`expr $i - 1`
zfs rename zfs/[EMAIL PROTECTED] zfs/[EMAIL PROTECTED] $i + 1`
  done
  zfs snapshot zfs/[EMAIL PROTECTED]
done

day=`date +%j`
nuke=`expr $day - 181`

if [ $nuke -lt 0 ] ; then
  nuke=`expr 365 + $nuke`
fi

zfs destroy  zfs/[EMAIL PROTECTED]
zfs snapshot zfs/[EMAIL PROTECTED]
zfs list -H zfs
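
a sketch of the same rotation with the snapshot names spelled out,
assuming per-filesystem snapshots named snap1..snap15 and a daily
day<julian> snapshot kept on zfs/backup:

#!/bin/sh

snaps=15

for fs in Videos Movies Music users local
do
  i=$snaps
  zfs destroy zfs/$fs@snap$snaps
  while [ $i -gt 1 ] ; do
    i=`expr $i - 1`
    zfs rename zfs/$fs@snap$i zfs/$fs@snap`expr $i + 1`
  done
  zfs snapshot zfs/$fs@snap1
done

day=`date +%j`
nuke=`expr $day - 181`
if [ $nuke -lt 0 ] ; then
  nuke=`expr 365 + $nuke`
fi
zfs destroy  zfs/backup@day$nuke
zfs snapshot zfs/backup@day$day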

15 % zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
zfs   1.71T   592G75K  /zfs
[EMAIL PROTECTED]57K  -69K  -
zfs/Movies1.36T   592G  1.36T  /zfs/Movies
zfs/[EMAIL PROTECTED]  238K  -  1.25T  -
zfs/[EMAIL PROTECTED]   62K  -  1.27T  -
zfs/[EMAIL PROTECTED]   66K  -  1.27T  -
zfs/[EMAIL PROTECTED]   56K  -  1.27T  -
zfs/[EMAIL PROTECTED]   48K  -  1.27T  -
zfs/[EMAIL PROTECTED]   48K  -  1.27T  -
zfs/[EMAIL PROTECTED]   132K  -  1.30T  -
zfs/[EMAIL PROTECTED]  0  -  1.33T  -
zfs/[EMAIL PROTECTED]  0  -  1.33T  -
zfs/[EMAIL PROTECTED]   188K  -  1.33T  -
zfs/[EMAIL PROTECTED]   178K  -  1.33T  -
zfs/[EMAIL PROTECTED]  0  -  1.35T  -
zfs/[EMAIL PROTECTED]  0  -  1.35T  -
zfs/[EMAIL PROTECTED]  0  -  1.35T  -
zfs/[EMAIL PROTECTED]   154K  -  1.36T  -
zfs/Music 6.96G   592G  6.96G  /zfs/Music
zfs/[EMAIL PROTECTED]  0  -  6.96G  -
zfs/[EMAIL PROTECTED]  0  -  6.96G  -
zfs/[EMAIL PROTECTED]  0  -  6.96G  -
zfs/[EMAIL PROTECTED]  0  -  6.96G  -
zfs/[EMAIL PROTECTED]  0  -  6.96G  -
zfs/[EMAIL PROTECTED]  0  -  6.96G  -
zfs/[EMAIL PROTECTED]   0  -  6.96G  -
zfs/[EMAIL PROTECTED]   0  -  6.96G  -
zfs/[EMAIL PROTECTED]   0  -  6.96G  -
zfs/[EMAIL PROTECTED]   0  -  6.96G  -
zfs/[EMAIL PROTECTED] 45K  -  6.96G  -
zfs/[EMAIL PROTECTED]   0  -  6.96G  -
zfs/[EMAIL PROTECTED]   0  -  6.96G  -
zfs/[EMAIL PROTECTED]   0  -  6.96G  -
zfs/[EMAIL PROTECTED]   0  -  6.96G  -
zfs/Videos 157G   592G   157G  /zfs/Videos
zfs/[EMAIL PROTECTED] 0  -   156G  -
zfs/[EMAIL PROTECTED] 0  -   156G  -
zfs/[EMAIL PROTECTED]   50K  -   156G  -
zfs/[EMAIL PROTECTED] 0  -   156G  -
zfs/[EMAIL PROTECTED] 0  -   156G  -
zfs/[EMAIL PROTECTED] 0  -   156G  -
zfs/[EMAIL PROTECTED]   146K  -   157G  -
zfs/[EMAIL PROTECTED]  0  -   157G  -
zfs/[EMAIL PROTECTED]  0  -   157G  -
zfs/[EMAIL PROTECTED]  0  -   157G  -
zfs/[EMAIL PROTECTED]54K  -   157G  -
zfs/[EMAIL PROTECTED]  0  -   157G  -
zfs/[EMAIL PROTECTED]  0  -   157G  -
zfs/[EMAIL PROTECTED]  0  -   157G  -
zfs/[EMAIL PROTECTED]  0  -   157G  -
zfs/backup 172G   592G   131G  /zfs/backup
zfs/[EMAIL PROTECTED] 341M  -   140G  -
zfs/[EMAIL PROTECTED] 295M  -   140G  -
zfs/[EMAIL PROTECTED] 265M  -   140G  -
zfs/[EMAIL PROTECTED] 236M  -   140G  -
zfs/[EMAIL PROTECTED] 247M  -   140G  -
zfs/[EMAIL PROTECTED] 288M  -   140G  -
zfs/[EMAIL PROTECTED] 251M  -   140G  -
zfs/[EMAIL PROTECTED] 268M  -   141G  -
zfs/[EMAIL PROTECTED] 260M  -   141G  -
zfs/[EMAIL PROTECTED] 201M  -   141G  -
zfs/[EMAIL PROTECTED] 284M  -   141G  -
zfs/[EMAIL PROTECTED] 316M  -   141G  -
zfs/[EMAIL PROTECTED] 309M  -   141G  -
zfs/[EMAIL PROTECTED] 289M  -   141G  -
zfs/[EMAIL PROTECTED] 252M  -   141G  -
zfs/[EMAIL PROTECTED] 269M  -   141G  -
zfs/[EMAIL PROTECTED] 268M  -   141G  -
zfs/[EMAIL PROTECTED] 220M  -   141G  -
zfs/[EMAIL PROTECTED] 241M  -   141G  -
zfs/[EMAIL PROTECTED] 242M  -   141G  -
zfs/[EMAIL PROTECTED]11.9M  -   141G  -
zfs/[EMAIL PROTECTED]9.59M  -   141G  -
zfs/[EMAIL PROTECTED] 266M  -   142G  -
zfs/[EMAIL PROTECTED] 241M  -   142G  -
zfs/[EMAIL PROTECTED] 259M  -   142G  -
zfs/[EMAIL PROTECTED] 274M  -   143G  -
zfs/[EMAIL PROTECTED] 254M  -   141G  -
zfs/[EMAIL PROTECTED] 257M  -   141G  -
zfs/[EMAIL PROTECTED] 261M  -   141G  -
zfs/[EMAIL PROTECTED] 

Re: [zfs-discuss] Expanding raidz2 [Infrant]

2006-07-13 Thread Rob Logan


Infrant NAS box and using their X-RAID instead.  

I've gone back to solaris from an Infrant box.

1) while the Infrant cpu is sparc, it's way, way slow.
  a) the web UI takes 3-5 seconds per page
  b) any local process, rsync, UPnP, SlimServer is cpu starved
2) like a netapp, it's frustrating to not have shell access
3) NFSv3 is buggy (use NFSv2)
  a) http://www.infrant.com/forum/viewtopic.php?t=546
  b) NFSv2 works, but its max filesize is 2Gig.
4) 8MB/sec writes and 15MB/sec reads isn't that fast
5) local rsync writes are 2MB/sec (use NFS instead)

put solaris on your old PPro box.  It will be faster (yes!), cheaper,
and you can do more than one snapshot (and it doesn't kill the system),
plus one gets shell access!
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Expanding raidz2

2006-07-13 Thread Rob Logan



comfortable with having 2 parity drives for 12 disks,


the thread starting config of 4 disks per controller(?):
zpool create tank raidz2 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c2t1d0 c2t2d0

then later
zpool add tank raidz2 c2t3d0 c2t4d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0

as described, doubles one's IOPs and usable space in tank, with the loss
of another two disks, splitting the cluster into four data (and two parity)
writes, one per disk.  perhaps an 8 disk controller, and start with

zpool create tank raidz c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

then do a
zpool add tank raidz c1t6d0 c1t7d0 c1t8d0 c2t1d0 c2t2d0
zpool add tank raidz c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0
zpool add tank spare c2t8d0

gives one the same largeish cluster size div 4 per raidz disk, 3x the
IOPs, less parity math per write, and a hot spare for the same usable
space and loss of 4 disks.

splitting the max 128k cluster into 12 chunks (+2 parity) makes good MTTR
sense but not much performance sense.  if someone wants to do the MTTR
math between all three configs, I'd love to read it.

Rob

http://storageadvisors.adaptec.com/2005/11/02/actual-reliability-calculations-for-raid/
http://www.barringer1.com/ar.htm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Thumper on (next) Tuesday?

2006-07-11 Thread Rob Logan

 Well, glue a beard on me and call me Nostradamus :

http://www.sun.com/servers/x64/x4500/arch-wp.pdf
http://www.cooldrives.com/8-channel-8-port-sata-pci-card.html
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: x86 CPU Choice for ZFS

2006-07-06 Thread Rob Logan


with ZFS the primary driver isn't cpu, it's how many drives
one can attach :-)  I use an 8 sata and 2 pata port
http://supermicro.com/Aplus/motherboard/Opteron/nForce/H8DCE.cfm
But there was a v20z I could steal registered ram and cpus from.
H8DCE can't use the SATA HBA Framework which only supports Marvell 88SX
and SI3124 controllers, so perhaps a 10 sata and 2 pata (14 drives!)
http://www.amdboard.com/abit_sv-1a.html would be a better choice.

Rob

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: opensol-20060605 # zpool iostat -v 1

2006-06-11 Thread Rob Logan

 a total of 4*64k = 256k to fetch a 2k block.
 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6437054

perhaps a quick win would be to tell vdev_cache
about the DMU_OT_* type so it can read ahead appropriately.
it seems the largest losses are metadata. (du,find,scrub/resilver)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ata panic [fixed]

2006-05-29 Thread Rob Logan


int sharing issues in bios 1.0c,

  [XSDT] v1 OEM ID [A M I ] OEM TABLE ID [OEMXSDT ] OEM rev 9000514
  [FACP] v3 OEM ID [A M I ] OEM TABLE ID [OEMFACP ] OEM rev 9000514
  [DSDT](id5) - 859 Objects with 89 Devices 277 Methods 17 Regions
  [APIC] v1 OEM ID [A M I ] OEM TABLE ID [OEMAPIC ] OEM rev 9000514

works correctly with bios 1.1a

  [XSDT] v1 OEM ID [A M I ] OEM TABLE ID [OEMXSDT ] OEM rev 4000607
  [FACP] v3 OEM ID [A M I ] OEM TABLE ID [OEMFACP ] OEM rev 4000607
  [DSDT](id5) - 872 Objects with 89 Devices 277 Methods 17 Regions
  [APIC] v1 OEM ID [A M I ] OEM TABLE ID [OEMAPIC ] OEM rev 4000607

found via source trail in:

 nge1: automatic recovery activated

fixed in:

 nge1: ddi_intr_get_supported_types() returned: 1
 nge1: nge_add_intrs: interrupt type 0x1
 nge1: Using FIXED interrupt type

still might turn off

 audio8100: audio8100: xid=0x05c7, vid1=0x414c, vid2=0x4720
 IRQ21 is being shared by drivers with different interrupt levels.

zfs group: wow, your stuff is way cool...


It's curious that your drive is not using DMA.  Please append ::msgbuf
output and if you can provide access to the core that would be even
better.


On Fri, 2006-05-26 at 18:55 -0400, Rob Logan wrote:

 `mv`ing files from a zfs dir to another zfs filesystem
 in the same pool will panic a 8 sata zraid
 http://supermicro.com/Aplus/motherboard/Opteron/nForce/H8DCE.cfm
 system with
 
 ::status

 debugging crash dump vmcore.3 (64-bit) from zfs
 operating system: 5.11 opensol-20060523 (i86pc)
 panic message:
 assertion failed: !(status  0x80), file: 
../../intel/io/dktp/controller/ata/ata
 _disk.c, line: 2212
 dump content: kernel pages only
 
 ::stack

 vpanic()
 assfail+0x83(f3afb508, f3afb4d8, 8a4)
 ata_disk_intr_pio_out+0x1dd(8f51b840, 84ff5440, 
911a8d50)
 ata_ctlr_fsm+0x237(2, 8f51b840, 0, 0, 0)
 ata_process_intr+0x3e(8f51b840, fe8b3be4)
 ghd_intr+0x72(8f51b958, fe8b3be4)
 ata_intr+0x25(8f51b840)
 av_dispatch_autovect+0x97(2d)
 intr_thread+0x50()





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ata panic

2006-05-26 Thread Rob Logan


`mv`ing files from a zfs dir to another zfs filesystem
in the same pool will panic a 8 sata zraid
http://supermicro.com/Aplus/motherboard/Opteron/nForce/H8DCE.cfm
system with

::status
debugging crash dump vmcore.3 (64-bit) from zfs
operating system: 5.11 opensol-20060523 (i86pc)
panic message:
assertion failed: !(status  0x80), file: ../../intel/io/dktp/controller/ata/ata
_disk.c, line: 2212
dump content: kernel pages only

::stack
vpanic()
assfail+0x83(f3afb508, f3afb4d8, 8a4)
ata_disk_intr_pio_out+0x1dd(8f51b840, 84ff5440, 
911a8d50)
ata_ctlr_fsm+0x237(2, 8f51b840, 0, 0, 0)
ata_process_intr+0x3e(8f51b840, fe8b3be4)
ghd_intr+0x72(8f51b958, fe8b3be4)
ata_intr+0x25(8f51b840)
av_dispatch_autovect+0x97(2d)
intr_thread+0x50()

every time...
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss