Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-06 Thread Brandon High
. If that -- ignoring cache flush requests -- is the whole reason why SSDs are so fast, I'm glad I haven't got one yet. They're fast for random reads and writes because they don't have seek latency. They're fast for sequential IO because they aren't limited by spindle speed. -- Brandon High : bh

Re: [zfs-discuss] Can the ZFS copies attribute substitute HW disk redundancy?

2012-07-30 Thread Brandon High
On Mon, Jul 30, 2012 at 7:11 AM, GREGG WONDERLY gregg...@gmail.com wrote: I thought I understood that copies would not be on the same disk, I guess I need to go read up on this again. ZFS attempts to put copies on separate devices, but there's no guarantee. -B -- Brandon High : bh
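For reference, copies is a per-dataset property; a minimal sketch of setting and checking it (pool and dataset names here are hypothetical):
   # zfs set copies=2 tank/data     (store two copies of each newly written block)
   # zfs get copies tank/data       (confirm the setting; existing blocks are not rewritten)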

Re: [zfs-discuss] Persistent errors?

2012-06-22 Thread Brandon High
-eV', it should have some (rather extensive) information. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-24 Thread Brandon High
fixes and new features added between snv_117 and snv_134 (the last OpenSolaris release). It might be worth updating to snv_134 at the very least. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] checking/fixing busy locks for zfs send/receive

2012-03-16 Thread Brandon High
of thing can mean interference between some combination of multiple send/receives at the same time, on the same filesystem? Look at 'zfs hold', 'zfs holds', and 'zfs release'. Sends and receives will place holds on snapshots to prevent them from being changed. -B -- Brandon High : bh
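A short illustration of the hold commands mentioned above (snapshot and tag names are hypothetical):
   # zfs hold keepme tank/fs@snap      (place a hold tagged 'keepme')
   # zfs holds tank/fs@snap            (list holds, including ones left by an interrupted receive)
   # zfs release keepme tank/fs@snap   (remove the hold so the snapshot can be destroyed)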

Re: [zfs-discuss] Compatibility of Hitachi Deskstar 7K3000 HDS723030ALA640 with ZFS

2012-03-06 Thread Brandon High
been using 8 x 3TB 5k3000 in a raidz2 for about a year without issue. The 3TB Deskstar comes off the same production line as the Ultrastar 5k3000. I would avoid the 2TB and smaller 5k3000 - they come off a separate production line. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Compatibility of Hitachi Deskstar 7K3000 HDS723030ALA640 with ZFS

2012-03-05 Thread Brandon High
the 7K3000 and 5K3000 drives have 512B physical sectors. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Server upgrade

2012-02-15 Thread Brandon High
it's a somewhat important decision. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] grrr, How to get rid of mis-touched file named `-c'

2011-11-26 Thread Brandon High
it might be done from a shell prompt. rm ./-c ./-O ./-k -- Brandon High : bh...@freaks.com
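For completeness, either of these forms works from a shell prompt; the second uses -- to stop option parsing (the file names match the ones in the thread):
   # rm ./-c ./-O ./-k
   # rm -- -c -O -k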

Re: [zfs-discuss] Replacement for X25-E

2011-09-22 Thread Brandon High
as a cache device with the Z68 chipset. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Deskstars and CCTL (aka TLER)

2011-09-22 Thread Brandon High
are not manufactured on the same line as the Ultrastar and seem to have lower reliability. Only the 3TB 5k3000 shares specs with the Ultrastar 5k3000. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Replacement for X25-E

2011-09-22 Thread Brandon High
. The 100GB Intel 710 costs ~ $650. The 311 is a good choice for home or budget users, and it seems that the 710 is much bigger than it needs to be for slog devices. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Deskstars and CCTL (aka TLER)

2011-09-07 Thread Brandon High
it to a startup script. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ZFS raidz on top of hardware raid0

2011-08-26 Thread Brandon High
times. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Intel 320 as ZIL?

2011-08-15 Thread Brandon High
to 80%). Intel recently added the 311, a small SLC-based drive for use as a temp cache with their Z68 platform. It's limited to 20GB, but it might be a better fit for use as a ZIL than the 320. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Disk IDs and DD

2011-08-09 Thread Brandon High
-the-solaris-device-tree -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Exapnd ZFS storage.

2011-08-03 Thread Brandon High
. You can create another vdev to add to your pool though. If you're adding another vdev, it should have the same geometry as your current (ie: 4 drives). The zpool command will complain if you try to add a vdev with different geometry or redundancy, though you can force it with -f. -B -- Brandon
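A minimal sketch of the add, assuming the existing vdev is a 4-drive raidz (device names are hypothetical):
   # zpool add tank raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0
   (add -f only if you really mean to override the geometry/redundancy warning)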

Re: [zfs-discuss] ZFS Fragmentation issue - examining the ZIL

2011-08-03 Thread Brandon High
if this was involved here. Using dedup on a pool that houses an Oracle DB is Doing It Wrong in so many ways... -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ZFS Fragmentation issue - examining the ZIL

2011-08-01 Thread Brandon High
is a real issue with pools that are (or have been) very full. The data gets written out in fragments and has to be read back in the same order. If the mythical bp_rewrite code ever shows up, it will be possible to defrag a pool. But not yet. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] SSD vs hybrid drive - any advice?

2011-07-26 Thread Brandon High
gives hints to the garbage collector that sectors are no longer in use. When the GC runs, it can more easily find flash blocks that aren't in use, or combine several mostly-empty blocks, and erase or otherwise free them for reuse later. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] recover zpool with a new installation

2011-07-26 Thread Brandon High
the pool. You can also use the Live CD or Live USB to access your pool or possibly fix your existing installation. You will have to force the zpool import with either a reinstall or a Live boot. -B -- Brandon High : bh...@freaks.com
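A minimal sketch of the forced import from a Live environment (pool name is hypothetical; -f overrides the check that the pool was last in use by another system):
   # zpool import -f tank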

Re: [zfs-discuss] Large scale performance query

2011-07-25 Thread Brandon High
bandwidth from your main storage pools than from the cache devices. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Replacing failed drive

2011-07-22 Thread Brandon High
the replacement drive. Since you've physically replaced the drive, you should just have to do: # zpool replace tank c10t0d0 The pool should resilver, and I think the spare should automatically detach. If not # zpool remove tank c10t6d0 should take care of it. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] SSD vs hybrid drive - any advice?

2011-07-21 Thread Brandon High
Wildfire, etc). -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] latest zpool version in solaris 11 express

2011-07-20 Thread Brandon High
be able to issue new certificates for public products. Please try again later -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Zil on multiple usb keys

2011-07-18 Thread Brandon High
. But it would be a really bad idea. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Replacement disks for Sun X4500

2011-07-15 Thread Brandon High
fine. This card uses the same Marvell controller as the x4500. Performance is fine if not slightly better than the WD10EADS drives that I replaced. Of course, the pool was about 92% full with the smaller drives ... -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Pure SSD Pool

2011-07-12 Thread Brandon High
are met. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Pure SSD Pool

2011-07-12 Thread Brandon High
it even once. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Pure SSD Pool

2011-07-11 Thread Brandon High
) should be fine until the volume gets very full. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Cannot format 2.5TB ext disk (EFI)

2011-06-24 Thread Brandon High
tried to use 2TB drives on an Atom N270-based board and they were not recognized, but they worked fine under FreeBSD. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] JBOD recommendation for ZFS usage

2011-05-30 Thread Brandon High
SAS device, or with SATA drives. A single port cable can be used with a single- or dual-ported SAS device (although it will only use one port) or with a SATA drive. A SATA cable can be used with a SATA device. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] offline dedup

2011-05-26 Thread Brandon High
, and the conclusion is that it would require bp_rewrite. Offline (or deferred) dedup certainly seems more attractive given the current real-time performance. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] optimal layout for 8x 1 TByte SATA (consumer)

2011-05-26 Thread Brandon High
with an 8-drive raidz2, though my usage is fairly light. The system is more than fast enough to saturate gigabit ethernet for sequential reads and writes. My drives were WD10EADS Green drives. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-24 Thread Brandon High
On Tue, May 24, 2011 at 12:41 PM, Richard Elling richard.ell...@gmail.com wrote: There are many ZFS implementations, each evolving as the contributors desire. Diversity and innovation is a good thing. ... unless Oracle's zpool v30 is different than Nexenta's v30. -B -- Brandon High : bh

Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-24 Thread Brandon High
Richard would probably know for certain. There will probably be a fork at some point to an OSS ZFS and an Oracle ZFS. Hopefully neither side will actively try to break compatibility. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Monitoring disk seeks

2011-05-19 Thread Brandon High
]->b_flags & B_WRITE ? "W" : "R"), args[0]->b_bcount ); } For every completed IO, this should give you the timestamp, device name, start LBA, Read or Write and length of the IO. -B -- Brandon High : bh...@freaks.com
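The truncated probe above looks like part of a DTrace io-provider script along these lines; this is a reconstruction under that assumption, not the original posting:
   #!/usr/sbin/dtrace -s
   io:::done
   {
       /* timestamp, device name, starting block, R/W flag, and transfer size */
       printf("%d %s %d %s %d\n", timestamp,
           args[1]->dev_statname, args[0]->b_blkno,
           (args[0]->b_flags & B_WRITE ? "W" : "R"),
           args[0]->b_bcount);
   }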

Re: [zfs-discuss] Solaris vs FreeBSD question

2011-05-18 Thread Brandon High
feed it the output of 'lspci -vv -n'. You may have to disable some on-board devices to get through the installer, but I couldn't begin to guess which. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Reboots when importing old rpool

2011-05-17 Thread Brandon High
On Tue, May 17, 2011 at 11:10 AM, Hung-ShengTsao (Lao Tsao) Ph.D. laot...@gmail.com wrote: may be do zpool import -R /a rpool 'zpool import -N' may work as well. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] 350TB+ storage solution

2011-05-16 Thread Brandon High
data disks that were a power of two was still recommended, due to the way that ZFS splits records/blocks in a raidz vdev. Or are you responding to some other point? -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] 350TB+ storage solution

2011-05-16 Thread Brandon High
. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] 350TB+ storage solution

2011-05-16 Thread Brandon High
environments. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Still no way to recover a corrupted pool

2011-05-16 Thread Brandon High
. What's most frustrating is that this is the third time I've built this pool due to corruption like this, within three months.  :( You may have an underlying hardware problem, or there could be a bug in the FreeBSD implementation that you're tripping over. -B -- Brandon High : bh

Re: [zfs-discuss] ZFS on HP MDS 600

2011-05-10 Thread Brandon High
for read workloads. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] primarycache=metadata seems to force behaviour of secondarycache=metadata

2011-05-10 Thread Brandon High
.. It wasn't that long ago when 66MB/s ATA was considered a waste because no drive could use that much bandwidth. These days a slow drive has max throughput greater than 110MB/s. (OK, looking at some online reviews, it was about 13 years ago. Maybe I'm just old.) -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-06 Thread Brandon High
but sometimes very different implementations. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Quick zfs send -i performance questions

2011-05-05 Thread Brandon High
. This could also be why the full sends perform better than incremental sends. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-05 Thread Brandon High
. As with ext4, block alignment is determined by partitioning and slices. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-05 Thread Brandon High
smaller. You'll have to worry about the guests' block alignment in the context of the image file, since two identical files may not create identical blocks as seen from ZFS. This means you may get only fractional savings and have an enormous DDT. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-04 Thread Brandon High
limitations, and it sucks when you hit them. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Quick zfs send -i performance questions

2011-05-04 Thread Brandon High
the send is stalled. You will have to fiddle with the buffer size and other options to tune it for your use. -B -- Brandon High : bh...@freaks.com
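One approach often suggested on this list is to put a userland buffer such as mbuffer in the pipe; a sketch with arbitrary starting values (the dataset names, buffer sizes, and the use of mbuffer itself are assumptions here, not from the original message):
   # zfs send tank/fs@snap | mbuffer -s 128k -m 1G | zfs receive -F backup/fs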

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-04 Thread Brandon High
better off cloning datasets that contain an unconfigured install and customizing from there? -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-05-03 Thread Brandon High
don't need to specify --whole-file, it's implied when copying on the same system. --inplace can play badly with hard links and shouldn't be used. It probably will be slower than other options but it may be more accurate, especially with -H. -B -- Brandon High : bh...@freaks.com
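A hedged example of the local rsync invocation being discussed (paths are hypothetical; -H preserves hard links, and --inplace is left out per the advice above):
   # rsync -aH /ufs/export/home/ /tank/home/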

Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-05-03 Thread Brandon High
, since files on both sides need to be read and checksummed. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ls reports incorrect file size

2011-05-02 Thread Brandon High
. NTFS supports sparse files. http://www.flexhex.com/docs/articles/sparse-files.phtml -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-30 Thread Brandon High
you can do about it short of deleting datasets and/or snapshots. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-29 Thread Brandon High
for non-dedup datasets, and is in fact the default. As an aside: Erik, any idea when the 159 bits will make it to the public? -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-04-29 Thread Brandon High
. You will probably want to set it back to default after you're done. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Still no way to recover a corrupted pool

2011-04-29 Thread Brandon High
On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash fjwc...@gmail.com wrote: Running ZFSv28 on 64-bit FreeBSD 8-STABLE. I'd suggest trying to import the pool into snv_151a (Solaris 11 Express), which is the reference and development platform for ZFS. -B -- Brandon High : bh...@freaks.com

[zfs-discuss] Finding where dedup'd files are

2011-04-28 Thread Brandon High
] sha256 uncompressed LE contiguous unique unencrypted 1-copy size=2L/2P birth=236799L/236799P fill=1 cksum=55c9f21af6399be:11f9d4f5ff4cb109:2af8b798671e47ba:d19caf78da295df5 How can I translate this into datasets or files? -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-28 Thread Brandon High
forces sha256. The default checksum used for deduplication is sha256 (subject to change). When dedup is enabled, the dedup checksum algorithm overrides the checksum property. -B -- Brandon High : bh...@freaks.com
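A quick way to see this in practice (dataset name hypothetical): once dedup is on, dedup'd blocks are checksummed with sha256 regardless of what the checksum property reports.
   # zfs set dedup=on tank/fs
   # zfs get checksum,dedup tank/fs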

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-28 Thread Brandon High
-- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Finding where dedup'd files are

2011-04-28 Thread Brandon High
On Thu, Apr 28, 2011 at 3:48 PM, Ian Collins i...@ianshome.com wrote: Dedup is at the block, not file level. Files are usually composed of blocks. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Finding where dedup'd files are

2011-04-28 Thread Brandon High
from? Since I have some datasets with dedup'd data, I'm a little paranoid about tanking the system if they are destroyed. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-27 Thread Brandon High
On Wed, Apr 27, 2011 at 12:51 PM, Lamp Zy lam...@gmail.com wrote: Any ideas how to identify which drive is the one that failed so I can replace it? Try the following: # fmdump -eV # fmadm faulty -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Drive replacement speed

2011-04-26 Thread Brandon High
, but at 13 hours in, the resilver has been managing ~ 100M/s and is 70% done. -B -- Brandon High : bh...@freaks.com

[zfs-discuss] Drive replacement speed

2011-04-25 Thread Brandon High
[truncated iostat output showing no activity on c0t0d0 and c0t1d0] -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-25 Thread Brandon High
also be referred to by its shortened column name, volblock. -- Brandon High : bh...@freaks.com
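For reference, volblocksize is fixed at zvol creation time; a sketch using hypothetical names and sizes:
   # zfs create -V 10G -o volblocksize=8k tank/vol
   # zfs list -o name,volblock tank/vol     (the shortened column name quoted above)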

Re: [zfs-discuss] Drive replacement speed

2011-04-25 Thread Brandon High
[truncated iostat output for c0t1d0] -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-25 Thread Brandon High
with the first spare. (I'd suggest verifying the device names before running it.) # zpool replace fwgpool0 c4t5000C5001128FE4Dd0 c4t5000C50014D70072d0 -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] How does ZFS dedup space accounting work with quota?

2011-04-25 Thread Brandon High
:1. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Drive replacement speed

2011-04-25 Thread Brandon High
On Mon, Apr 25, 2011 at 5:26 PM, Brandon High bh...@freaks.com wrote: Setting zfs_resilver_delay seems to have helped some, based on the iostat output. Are there other tunables? I found zfs_resilver_min_time_ms while looking. I've tried bumping it up considerably, without much change. 'zpool
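For reference, these resilver tunables are live kernel variables on Solaris-derived systems and are usually inspected and set with mdb; a sketch only, with arbitrary values:
   # echo zfs_resilver_delay/D | mdb -k             (read the current value)
   # echo zfs_resilver_delay/W0t0 | mdb -kw         (set it to 0)
   # echo zfs_resilver_min_time_ms/W0t3000 | mdb -kw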

Re: [zfs-discuss] just can't import

2011-04-11 Thread Brandon High
and seem to hang until it's completed. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] just can't import

2011-04-11 Thread Brandon High
, I suspect from the constant writes.) -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] just can't import

2011-04-10 Thread Brandon High
Solaris or Solaris 11 Express may complete it faster. Any tips greatly appreciated, Just wait... -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ZFS Going forward after Oracle - Let's get organized, let's get started.

2011-04-09 Thread Brandon High
to hear that there's a new feature being worked on, rather than the radio silence we've had. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] How to rename rpool. Is that recommended ?

2011-04-08 Thread Brandon High
? Yes you can do it, no it is not recommended. I had a need to do something similar to what you're attempting and ended up using a Live CD (which doesn't have an rpool to have a naming conflict) to do the manipulations. -B -- Brandon High : bh...@freaks.com
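For what it's worth, the usual rename trick from a Live environment is to re-import the pool under a new name (names here are hypothetical, and doing this to a root pool can break boot configuration, hence the "not recommended"):
   # zpool import -f -R /a rpool newname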

Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-07 Thread Brandon High
this, however. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread Brandon High
version. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread Brandon High
redundancy on its zfs storage, and not just multiple vdsk on the same host disk / lun. Either give it access to the raw devices, or use iSCSI, or create your vdsk on different luns and raidz them, etc. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] NTFS on NFS and iSCSI always generates small IO's

2011-03-10 Thread Brandon High
, but it certainly won't hurt. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] NTFS on NFS and iSCSI always generates small IO's

2011-03-10 Thread Brandon High
. So you will need to create a new VM store after the recordsize is tuned. You can change the recordsize and copy the vmdk files on the nfs server, which will re-write them with a smaller recordsize. -B -- Brandon High : bh...@freaks.com
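A sketch of the tune-then-copy approach described above (dataset, size, and file names are hypothetical; only newly written blocks pick up the new recordsize):
   # zfs set recordsize=8k tank/vmstore
   # cp guest-flat.vmdk guest-flat.vmdk.new && mv guest-flat.vmdk.new guest-flat.vmdk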

Re: [zfs-discuss] Slices and reservations Was: Re: How long should an empty destroy take? snv_134

2011-03-07 Thread Brandon High
a nice safety net to have. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ZFS send/recv horribly slow on system with 1800+ filesystems

2011-03-01 Thread Brandon High
. 2.) Why do we see 4MB-8MB/s of *writes* to the filesystem when we do a 'zfs send' to /dev/null ? Is anything else using the filesystems in the pool? -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ZFS Performance

2011-02-28 Thread Brandon High
On Sun, Feb 27, 2011 at 7:35 PM, Brandon High bh...@freaks.com wrote: It moves from best fit to any fit at a certain point, which is at ~ 95% (I think). Best fit looks for a large contiguous space to avoid fragmentation while any fit looks for any free space. I got the terminology wrong, it's

Re: [zfs-discuss] External SATA drive enclosures + ZFS?

2011-02-27 Thread Brandon High
4 drives, with an expansion slot for an additional controller. I think some people have reported success with these on the list. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] External SATA drive enclosures + ZFS?

2011-02-27 Thread Brandon High
power and lower minimum receive power. An internal port might work with a SATA to eSATA cable or adapter, but it's not guaranteed to. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ZFS Performance

2011-02-27 Thread Brandon High
from best fit to any fit at a certain point, which is at ~ 95% (I think). Best fit looks for a large contiguous space to avoid fragmentation while any fit looks for any free space. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] What drives?

2011-02-26 Thread Brandon High
they exist. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] External SATA drive enclosures + ZFS?

2011-02-25 Thread Brandon High
CPU time. What about an inexpensive SAS card (eg: Supermicro AOC-USAS-L4i) and external SAS enclosure (eg: Sans Digital TowerRAID TR4X)? It would cost about $350 for the setup. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ZFS/Drobo (Newbie) Question

2011-02-08 Thread Brandon High
assertion doesn't seem to hold up. I think he meant that if one drive in a mirror dies completely, then any single read error on the remaining drive is not recoverable. With raidz2 (or a 3-way mirror for that matter), if one drive dies completely, you still have redundancy. -B -- Brandon High

Re: [zfs-discuss] ZFS/Drobo (Newbie) Question

2011-02-07 Thread Brandon High
not recommended to use different levels of redundancy in a pool, so you may want to consider using mirrors for everything. This also makes it easier to add or upgrade capacity later. -B -- Brandon High : bh...@freaks.com
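A minimal sketch of the all-mirrors layout suggested above (device names are hypothetical):
   # zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
   # zpool add tank mirror c0t4d0 c0t5d0      (capacity is grown later by adding another pair)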

Re: [zfs-discuss] Understanding directio, O_DSYNC and zfs_nocacheflush on ZFS

2011-02-07 Thread Brandon High
different beast than UFS and doesn't require the same tuning. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Understanding directio, O_DSYNC and zfs_nocacheflush on ZFS

2011-02-07 Thread Brandon High
are being cached, because any data that is written synchronously will be committed to stable storage before the write returns. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ZFS and spindle speed (7.2k / 10k / 15k)

2011-02-06 Thread Brandon High
is less likely with the lower density? More platters leads to more heat and higher power consumption. Most drives are 3 or 4 platters, though Hitachi usually manufactures 5 platter drives as well. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ZFS and spindle speed (7.2k / 10k / 15k)

2011-02-02 Thread Brandon High
the current batch of 3TB drives are 7200 RPM with 5 platters and 667GB per platter or 5400 RPM with 4 platters at 750GB/platter. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ZFS and TRIM

2011-01-29 Thread Brandon High
On Sat, Jan 29, 2011 at 8:31 AM, Edward Ned Harvey opensolarisisdeadlongliveopensola...@nedharvey.com wrote: What is the status of ZFS support for TRIM? I believe it's been supported for a while now. http://www.c0t0d0s0.org/archives/6792-SATA-TRIM-support-in-Opensolaris.html -B -- Brandon

Re: [zfs-discuss] reliable, enterprise worthy JBODs?

2011-01-26 Thread Brandon High
, and I've found them to all be about the same. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] reliable, enterprise worthy JBODs?

2011-01-25 Thread Brandon High
RAID arrays at it. Off the top of my head, I can think of 3 sources: LSI, Dell and Supermicro. LSI sells the 620J and 630J. I believe these are what Dell re-labels as the M1000. Supermicro makes server chassis and sells JBOD kits. There are many more, if you take time to look. -B -- Brandon

Re: [zfs-discuss] Is my bottleneck RAM?

2011-01-20 Thread Brandon High
a second disk might destroy your data. With raidz2, you can lose any 2 disks, but you pay for it with somewhat lower performance. -B -- Brandon High : bh...@freaks.com
