Re: [zfs-discuss] Heavy write IO for no apparent reason

2013-01-18 Thread Freddie Cash
If no pools are specified, statistics for every pool in the system are shown. If count is specified, the command exits after count reports are printed. :D
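A minimal usage sketch (pool name hypothetical): print per-vdev statistics every 5 seconds and exit after 10 reports.

# zpool iostat -v tank 5 10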

Re: [zfs-discuss] any more efficient way to transfer snapshot between two hosts than ssh tunnel?

2012-12-13 Thread Freddie Cash
On Dec 13, 2012 8:02 PM, "Fred Liu" wrote: > > Assuming in a secure and trusted env, we want to get the maximum transfer speed without the overhead from ssh. Add the HPN patches to OpenSSH and enable the NONE cipher. We can saturate a gigabit link (980 Mbps) between two FreeBSD hosts using that setup.
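A sketch of that setup, assuming HPN-patched OpenSSH on both ends (host and dataset names hypothetical; the server's sshd_config must also set NoneEnabled yes):

# zfs send tank/data@snap | ssh -o NoneEnabled=yes -o NoneSwitch=yes backuphost zfs recv -F backup/data

Authentication still happens over an encrypted channel; only the bulk data stream is sent unencrypted.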

Re: [zfs-discuss] S11 vs illumos zfs compatiblity

2012-12-13 Thread Freddie Cash

Re: [zfs-discuss] Remove disk

2012-12-06 Thread Freddie Cash
Every disk in a ZFS pool has metadata on it, including which pool it's part of, which vdev it's part of, etc. Thus, if you do an export followed by an import, then ZFS will read the metadata off the disks and sort things out automatically.
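In practice (pool name hypothetical), that's just:

# zpool export tank
# zpool import tank

The import scans the devices and matches them by metadata, regardless of how the device names may have shuffled.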

Re: [zfs-discuss] Question about degraded drive

2012-11-27 Thread Freddie Cash
And you can try 'zpool online' on the failed drive to see if it comes back online. On Nov 27, 2012 6:08 PM, "Freddie Cash" wrote: > You don't use replace on mirror vdevs. > > 'zpool detach' the failed drive. Then 'zpool attach' the new drive
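A sketch of the detach/attach procedure (pool and device names hypothetical):

# zpool detach tank da1        (drop the failed disk from the mirror)
# zpool attach tank da0 da2    (attach the new disk as a mirror of the surviving one)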

Re: [zfs-discuss] Question about degraded drive

2012-11-27 Thread Freddie Cash
You don't use replace on mirror vdevs. 'zpool detach' the failed drive. Then 'zpool attach' the new drive. On Nov 27, 2012 6:00 PM, "Chris Dunbar - Earthside, LLC" < cdun...@earthside.net> wrote: > Hello, > > I have a degraded mirror set and this has happened a few times (not > a

Re: [zfs-discuss] Repairing corrupted ZFS pool

2012-11-19 Thread Freddie Cash
Create a new filesystem. rsync data from /path/to/filesystem/.zfs/snapshot/snapname/ to the new filesystem. Snapshot the new filesystem. rsync data from /path/to/filesystem/.zfs/snapshot/snapname+1/ to the new filesystem. Snapshot the new filesystem. See if zfs diff works. If it does, repeat the rsync/snapshot steps for each remaining snapshot.
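A sketch of those steps (dataset and snapshot names hypothetical):

# zfs create tank/new
# rsync -a /tank/broken/.zfs/snapshot/snap1/ /tank/new/
# zfs snapshot tank/new@snap1
# rsync -a --delete /tank/broken/.zfs/snapshot/snap2/ /tank/new/
# zfs snapshot tank/new@snap2
# zfs diff tank/new@snap1 tank/new@snap2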

Re: [zfs-discuss] Intel DC S3700

2012-11-13 Thread Freddie Cash
Anandtech.com has a thorough review of it. Performance is consistent (within 10-15% IOPS) across the lifetime of the drive, it has capacitors to flush the RAM cache to disk, and it doesn't store user data in the cache. It's also cheaper per GB than the 710 it replaces. On 2012-11-13 3:32 PM, "Jim Klimov" wrote:

Re: [zfs-discuss] ZFS best practice for FreeBSD?

2012-10-13 Thread Freddie Cash
Ah, okay, that makes sense. I wasn't offended, just confused. :) Thanks for the clarification. On Oct 13, 2012 2:01 AM, "Jim Klimov" wrote: > 2012-10-12 19:34, Freddie Cash wrote: > >> On Fri, Oct 12, 2012 at 3:28 AM, Jim Klimov wrote: >> >>> In fact

Re: [zfs-discuss] ZFS best practice for FreeBSD?

2012-10-12 Thread Freddie Cash
My home file server ran with mixed vdevs for a while (a 2 IDE-disk mirror vdev with a 3 SATA-disk raidz1 vdev) as it was built using scrounged parts. But all my work file servers have matched vdevs.

Re: [zfs-discuss] ZFS best practice for FreeBSD?

2012-10-11 Thread Freddie Cash
gpt/log       1.98G    460K  1.98G      -
cache             -       -      -      -      -      -
gpt/cache1    32.0G   32.0G     8M      -

Re: [zfs-discuss] removing upgrade notice from 'zpool status -x'

2012-10-04 Thread Freddie Cash
On Thu, Oct 4, 2012 at 9:45 AM, Jim Klimov wrote: > 2012-10-04 20:36, Freddie Cash wrote: >> >> On Thu, Oct 4, 2012 at 9:14 AM, Richard Elling >> wrote: >>> >>> On Oct 4, 2012, at 8:58 AM, Jan Owoc wrote: >>> The return code for zpool is ambiguous

Re: [zfs-discuss] removing upgrade notice from 'zpool status -x'

2012-10-04 Thread Freddie Cash
before. Not sure why I didn't see "health" in the list of pool properties all the times I've read the zpool man page.
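It's queryable like any other pool property (pool name hypothetical):

# zpool get health tank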

Re: [zfs-discuss] vm server storage mirror

2012-09-26 Thread Freddie Cash
If you're willing to try FreeBSD, there's HAST (aka high availability storage) for this very purpose. You use HAST to create mirror pairs using 1 disk from each box, thus creating /dev/hast/* nodes. Then you use those to create the zpool on the 'primary' box. All writes to the pool on the primary

Re: [zfs-discuss] finding smallest drive that can be used to replace

2012-09-05 Thread Freddie Cash
Query the size of the other drives in the vdev, obviously. ;) So long as the replacement is larger than the smallest remaining drive, it'll work. On Sep 5, 2012 8:57 AM, "Yaverot" wrote: > > > --- skiselkov...@gmail.com wrote: > >On 09/05/2012 05:06 AM, Yaverot wrote: > > "What is the smallest si
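On FreeBSD, one way to query a drive's size (device name hypothetical; on Solaris, format or prtvtoc gives the same information):

# diskinfo -v da0 | grep mediasize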

Re: [zfs-discuss] ZIL devices and fragmentation

2012-07-30 Thread Freddie Cash
the -c option.
    -D   Imports destroyed pools only. The -f option is also required.
    -f   Forces import, even if the pool appears to be potentially active.
    -m   Enables import with missing log devices.

Re: [zfs-discuss] encfs on top of zfs

2012-07-30 Thread Freddie Cash
> encryption) affect zfs specific features like data integrity and > deduplication? If you are using FreeBSD, why not use GELI to provide the block devices used for the ZFS vdevs? That's the "standard" way to get encryption and ZFS working on FreeBSD.
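A sketch of that layering (device names hypothetical; geli prompts for a passphrase unless a key file is given):

# geli init -s 4096 /dev/da0
# geli init -s 4096 /dev/da1
# geli attach /dev/da0
# geli attach /dev/da1
# zpool create tank mirror da0.eli da1.eli

ZFS only ever sees the decrypted .eli providers, so checksumming and dedupe work exactly as on plain disks.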

Re: [zfs-discuss] ZIL devices and fragmentation

2012-07-30 Thread Freddie Cash
force a new import, though, but it didn't boot up > normally, and told me it couldn't import its pool due to lack of SLOG devices. Positive. :) I tested it with ZFSv28 on FreeBSD 9-STABLE a month or two ago. See the updated man page for zpool, especially the bit about "import -m".
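That looks like (pool name hypothetical):

# zpool import -m tank

which brings the pool in even when the separate log device is gone; any uncommitted ZIL records on the lost device are simply dropped.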

Re: [zfs-discuss] ZIL devices and fragmentation

2012-07-30 Thread Freddie Cash
separate device), and > then lose this SLOG (disk crash etc), you will probably lose the pool. So if > you want/need SLOG, you probably want two of them in a mirror… That's only true on older versions of ZFS. ZFSv19 (or 20?) includes the ability to import a pool with a failed/missing log device.

Re: [zfs-discuss] Question on 4k sectors

2012-07-19 Thread Freddie Cash
any size of sectors you want. This can be used to create ashift=12 vdevs on top of 512B, pseudo-512B, or 4K drives.

# gnop create -S 4096 da{0,1,2,3,4,5,6,7}
# zpool create pool raidz2 da{0,1,2,3,4,5,6,7}.nop
# zpool export pool
# gnop destroy da{0,1,2,3,4,5,6,7}.nop
# zpool import pool

(The ashift is stored in the vdev labels, so it survives the export/import even though the .nop providers are gone.)

Re: [zfs-discuss] Broken ZFS filesystem

2012-05-08 Thread Freddie Cash
On Tue, May 8, 2012 at 10:24 AM, Freddie Cash wrote: > I have an interesting issue with one single ZFS filesystem in a pool. > All the other filesystems are fine, and can be mounted, snapshotted, > destroyed, etc.  But this one filesystem, if I try to do any operation > on it (zf

[zfs-discuss] Broken ZFS filesystem

2012-05-08 Thread Freddie Cash
sratio 5.93x

Re: [zfs-discuss] cluster vs nfs

2012-04-26 Thread Freddie Cash
On Thu, Apr 26, 2012 at 4:34 AM, Deepak Honnalli wrote: > cachefs is present in Solaris 10. It is EOL'd in S11. And for those who need/want to use Linux, the equivalent is FSCache.

Re: [zfs-discuss] Aaron Toponce: Install ZFS on Debian GNU/Linux

2012-04-18 Thread Freddie Cash
"They have encryption and we don't? Can it be backported to illumos ..." It's too bad Oracle hasn't followed through (yet?) with their promise to open-source the ZFS (and other CDDL-licensed?) code in Solaris 11. :(

Re: [zfs-discuss] Drive upgrades

2012-04-13 Thread Freddie Cash
ing added a "if the blockcount is within 10%, then allow the replace to succeed" feature, to work around this issue?

Re: [zfs-discuss] Apple's ZFS-alike - Re: Does raidzN actually protect against bitrot? If yes - how?

2012-01-16 Thread Freddie Cash

Re: [zfs-discuss] SAS HBA's with No Raid

2011-12-06 Thread Freddie Cash
set, so it's limited to 2 TB harddrives: http://www.supermicro.com/products/accessories/addon/AOC-USAS-L4i_R.cfm You could always check if there's an IT-mode firmware for the 9212-4i4e card available on the LSI website, and flash that onto the card. That "disables"/removes the RAID functionality.

Re: [zfs-discuss] ZFS not starting

2011-12-01 Thread Freddie Cash
server temporarily to get things working on this box again.
> # sysctl hw.physmem
> hw.physmem: 6363394048
>
> # sysctl vfs.zfs.arc_max
> vfs.zfs.arc_max: 5045088256
>
> (I lowered arc_max to 1GB but it hasn't helped)
DO NOT LOWER THE ARC WHEN DEDUPE IS ENABLED!!

Re: [zfs-discuss] Remove corrupt files from snapshot

2011-11-15 Thread Freddie Cash
Give the disks directly to ZFS, and create a pool using a mirror vdev. File-backed ZFS vdevs really should only be used for testing purposes.

Re: [zfs-discuss] about btrfs and zfs

2011-10-17 Thread Freddie Cash
On Mon, Oct 17, 2011 at 10:50 AM, Harry Putnam wrote: > Freddie Cash writes: > > > If you only want RAID0 or RAID1, then btrfs is okay. There's no support > for > > RAID5+ as yet, and it's been "in development" for a couple of years now. > > [...

Re: [zfs-discuss] about btrfs and zfs

2011-10-17 Thread Freddie Cash
(currently only in Solaris 11)
- built-in CIFS/NFS sharing (on Solaris-based systems; FreeBSD uses normal nfsd and Samba for this)
- automatic hot-spares (on Solaris-based systems; FreeBSD only supports manual spares)
- and more
Maybe in another 5 years or so, Btrfs will be up to the point where ZFS is today.

Re: [zfs-discuss] zfs send and dedupe

2011-09-07 Thread Freddie Cash
8-STABLE/9-BETA. And whether or not "zfs send" is faster/better/easier/more reliable than rsyncing snapshots (which is what we do currently). Thanks for the info.

[zfs-discuss] zfs send and dedupe

2011-09-06 Thread Freddie Cash
Just curious if anyone has looked into the relationship between zpool dedupe, zfs send dedupe, memory use, and network throughput. For example, does 'zfs send -D' use the same DDT as the pool? Or does it require more memory for its own DDT, thus impacting performance of both? If you have a dedup
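For context, the flag in question (dataset and host names hypothetical):

# zfs send -D -R tank@backup | ssh backuphost zfs recv -d backup

-D builds a dedup table over the blocks in the stream itself, so duplicate blocks are sent only once.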

Re: [zfs-discuss] Space usage

2011-08-14 Thread Freddie Cash
dancy or anything like that, but does include some compression and other info (I believe). There's an excellent post in the archives that shows how "ls -l", du, df, "zfs list", and "zpool list" work, and what each sees as "d

Re: [zfs-discuss] Question: adding a single drive to a mirrored zpool

2011-06-24 Thread Freddie Cash
th that as long as the rest of my zpool remains intact. > Note: you will have 0 redundancy on the ENTIRE POOL, not just that one vdev. If that non-redundant vdev dies, you lose the entire pool. Are you willing to take that risk, if one of the new drives is already DoA?

Re: [zfs-discuss] question about COW and snapshots

2011-06-16 Thread Freddie Cash

Re: [zfs-discuss] changing vdev types

2011-06-01 Thread Freddie Cash
On Wed, Jun 1, 2011 at 2:34 PM, Freddie Cash wrote: > On Wed, Jun 1, 2011 at 12:45 PM, Eric Sproul wrote: >> On Wed, Jun 1, 2011 at 2:54 PM, Matt Harrison >> wrote: >> > Hi list, >> > >> > I've got a pool that's got a single raidz1 vdev.

Re: [zfs-discuss] changing vdev types

2011-06-01 Thread Freddie Cash
You can't remove vdevs (raidz*, mirror, single) from a pool, so you can't "add" a new vdev and "remove" the old vdev to convert between vdev types. The only solution to the OP's question is to create a new pool, transfer the data, and destroy the old pool. There are several ways to do that.
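One way to do the transfer (pool and snapshot names hypothetical):

# zfs snapshot -r tank@migrate
# zfs send -R tank@migrate | zfs recv -F newtank
# zpool destroy tank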

Re: [zfs-discuss] Compatibility between Sun-Oracle Fishworks appliance zfs and other zfs implementations

2011-05-26 Thread Freddie Cash
http://people.freebsd.org/~mm/patches/zfs/v28/ ZFS-on-FUSE for Linux currently only supports ZFSv23. So you can "safely" use Illumos, Nexenta, FreeBSD, etc with ZFSv28. You can also use Solaris 11 Express, so long as you don't upgrade the pool version (SolE includes ZFSv31).

Re: [zfs-discuss] Solaris vs FreeBSD question

2011-05-18 Thread Freddie Cash
- generic Intel motherboard
- 2.8 GHz P4 CPU
- 3 SATA1 harddrives connected to motherboard, in a raidz1 vdev
- 2 IDE harddrives connected to a Promise PCI controller, in a mirror vdev
- 2 GB non-ECC SDRAM
- 2 GB USB stick for the OS install
- FreeBSD 8.2

Re: [zfs-discuss] Still no way to recover a "corrupted" pool

2011-05-16 Thread Freddie Cash
On Fri, Apr 29, 2011 at 5:17 PM, Brandon High wrote: > On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash wrote: >> Running ZFSv28 on 64-bit FreeBSD 8-STABLE. > > I'd suggest trying to import the pool into snv_151a (Solaris 11 > Express), which is the reference and development platform.

Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-05-03 Thread Freddie Cash
I can tell, this is due almost > exclusively to the fact that rsync needs to build an in-memory table of all > work being done *before* it starts to copy. rsync 2.x works that way, building a complete list of files/directories to copy before starting the copy. rsync 3.x doesn't. 3.x

Re: [zfs-discuss] Still no way to recover a "corrupted" pool

2011-04-29 Thread Freddie Cash
On Fri, Apr 29, 2011 at 5:00 PM, Alexander J. Maidak wrote: > On Fri, 2011-04-29 at 16:21 -0700, Freddie Cash wrote: >> On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash wrote: >> > Is there any way, yet, to import a pool with corrupted space_map >> > errors, or "zio->io_type != ZIO_TYPE_WRITE" assertions?

Re: [zfs-discuss] Still no way to recover a "corrupted" pool

2011-04-29 Thread Freddie Cash
On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash wrote: > Is there any way, yet, to import a pool with corrupted space_map > errors, or "zio->io_type != ZIO_TYPE_WRITE" assertions? > > I have a pool comprised of 4 raidz2 vdevs of 6 drives each.  I have > almost 10 TB of data

[zfs-discuss] Still no way to recover a "corrupted" pool

2011-04-29 Thread Freddie Cash
nning, which were not killed by the shutdown process for some reason, which prevented 8 ZFS filesystems from being unmounted, which prevented the pool from being exported (even though I have a "zfs unmount -f" and "zpool export -f" fail-safe), which

Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-04-29 Thread Freddie Cash
rsync with --whole-file --inplace (and other options) works extremely fast for updates.

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-25 Thread Freddie Cash
On Mon, Apr 25, 2011 at 10:55 AM, Erik Trimble wrote: > Min block size is 512 bytes. Technically, isn't the minimum block size 2^(ashift value)? Thus, on 4 KB disks where the vdevs have an ashift=12, the minimum block size will be 4 KB.

Re: [zfs-discuss] A resilver record?

2011-03-21 Thread Freddie Cash
s each, and then it just started taking longer and longer for each drive.

Re: [zfs-discuss] "Invisible" snapshot/clone

2011-03-16 Thread Freddie Cash
28. And there are patches available for testing ZFSv28 on FreeBSD 8-STABLE. Let's keep the OS pot shots to a minimum, eh?

Re: [zfs-discuss] ZFS Dedup question

2011-01-28 Thread Freddie Cash
as the last block or three of the file will be different. Repeat changing different lines in the file, and watch as disk usage only increases a little, since the files still "share" (or have in common) a lot of blocks. ZFS dedupe happens at the block layer, not the file layer.
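A quick way to watch this happen (pool and dataset names hypothetical):

# zfs set dedup=on tank/test
# dd if=/dev/urandom of=/tank/test/a bs=128k count=100
# cp /tank/test/a /tank/test/b
# zpool list tank

The DEDUP column should show roughly 2.00x, since every block of the copy hashes to an existing DDT entry.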

Re: [zfs-discuss] reliable, enterprise worthy JBODs?

2011-01-25 Thread Freddie Cash
, SAS connectors). Some consider those enterprise-grade (after all, it's 6 Gbps SAS, multilaned, multipathed, but not multi-), some don't (it's not IBM/Oracle/HP/etc, oh noes!!). Chenbro also has similar setups to SuperMicro. Again, it's not "big-name storage company".

Re: [zfs-discuss] mixing drive sizes within a pool

2011-01-13 Thread Freddie Cash
Performance won't be as good as it could be due to the uneven striping, especially when the smaller vdevs get to be full. But it works.

Re: [zfs-discuss] A few questions

2010-12-16 Thread Freddie Cash
Creating 1 pool gives you the best performance and the most flexibility. Use separate filesystems on top of that pool if you want to tweak all the different properties. Going with 1 pool also increases your chances for dedupe, as dedupe is done at the pool level.
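For example (pool/filesystem names and property values hypothetical):

# zpool create tank raidz2 da0 da1 da2 da3 da4 da5
# zfs create -o compression=lzjb tank/logs
# zfs create -o recordsize=8k tank/db

Each filesystem gets its own properties, while all of them draw from (and dedupe against) the same pool.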

Re: [zfs-discuss] ZFS ... open source moving forward?

2010-12-10 Thread Freddie Cash
... Are they all screwed? ZFSv28 is available for FreeBSD 9-CURRENT. We won't know until after Oracle releases Solaris 11 whether or not they'll live up to their promise to open the source to ZFSv31. Until Solaris 11 is released, there's really not much point in debating it.

Re: [zfs-discuss] accidentally added a drive?

2010-12-06 Thread Freddie Cash
e I am afraid. > > .. or add a mirror to that drive, to keep some redundancy. And to ad4s1d as well, since it's also a stand-alone, non-redundant vdev. Since there are two drives that are non-redundant, it would probably be best to re-do the pool.
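Attaching a mirror to an existing single-disk vdev looks like (pool name and second device hypothetical):

# zpool attach tank ad4s1d ad6s1d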

Re: [zfs-discuss] how to quiesce and unquiesc zfs and zpool for array/hardware snapshots ?

2010-11-15 Thread Freddie Cash
The only way to quiesce a pool is to take the pool offline via zpool export. One more reason to stop using hardware storage systems and just let ZFS handle the drives directly. :)

Re: [zfs-discuss] Recovering from corrupt ZIL

2010-10-24 Thread Freddie Cash
There are experimental patches available for ZFSv28.

Re: [zfs-discuss] vdev failure -> pool loss ?

2010-10-18 Thread Freddie Cash
On Mon, Oct 18, 2010 at 8:51 AM, Darren J Moffat wrote: > On 18/10/2010 16:48, Freddie Cash wrote: >> >> On Mon, Oct 18, 2010 at 6:34 AM, Edward Ned Harvey >>  wrote: >>>> >>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-

Re: [zfs-discuss] vdev failure -> pool loss ?

2010-10-18 Thread Freddie Cash
On Mon, Oct 18, 2010 at 6:34 AM, Edward Ned Harvey wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> boun...@opensolaris.org] On Behalf Of Freddie Cash >> >> If you lose 1 vdev, you lose the pool. > > As long as 1 vdev is striped and not mirrored

Re: [zfs-discuss] vdev failure -> pool loss ?

2010-10-17 Thread Freddie Cash
AID-0 is lost. Similar for the pool.

Re: [zfs-discuss] Optimal raidz3 configuration

2010-10-15 Thread Freddie Cash
I've avoided any vdev with more than 8 drives in it.

Re: [zfs-discuss] Increase size of 2-way mirror

2010-10-06 Thread Freddie Cash
>   mirror-0  ONLINE  0  0  0
>     c1t2d0  ONLINE  0  0  0
>     c1t3d0  ONLINE  0  0  0
>   mirror-1  ONLINE  0  0  0
>     c1t4d0  ONLINE  0  0  0
>     c1t5d0  ONLINE  0  0  0
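The usual way to grow such a mirror (replacement device names hypothetical; autoexpand needs pool version 16+, otherwise export/import after the last replace):

# zpool set autoexpand=on tank
# zpool replace tank c1t2d0 c2t0d0
# zpool replace tank c1t3d0 c2t1d0

Once both resilvers finish, mirror-0 grows to the new disk size.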

Re: [zfs-discuss] Is there a way to limit ZFS File Data but maintain room for the ARC to cache metadata

2010-10-01 Thread Freddie Cash
can be used and > keep the arc cache warm with metadata.  Any suggestions? Would adding a cache device (L2ARC) and setting primarycache=metadata and secondarycache=all on the root dataset do what you need? That way ARC is used strictly for metadata, and L2ARC is used for metadata and file data.
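That would be (pool name and cache device hypothetical):

# zpool add tank cache da2
# zfs set primarycache=metadata tank
# zfs set secondarycache=all tank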

Re: [zfs-discuss] Any zfs fault injection tools?

2010-09-24 Thread Freddie Cash
Pulling the cable from a disk while doing normal reads/writes is also fun. Using the controller software (if a RAID controller) to delete LUNs/disks is also fun.

Re: [zfs-discuss] create mirror copy of existing zfs stack

2010-09-20 Thread Freddie Cash
l name to the drive you are removing. You can then use that drive to create a new pool, thus creating a duplicate of the original pool.

Re: [zfs-discuss] recordsize

2010-09-16 Thread Freddie Cash
The recordsize property applies only to newly written data. Any existing data is not affected until it is re-written or copied.
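For example (dataset name hypothetical):

# zfs set recordsize=8k tank/db

Files already in tank/db keep their old block sizes until they're rewritten.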

Re: [zfs-discuss] resilver = defrag?

2010-09-09 Thread Freddie Cash
On Thu, Sep 9, 2010 at 1:26 PM, Freddie Cash wrote: > On Thu, Sep 9, 2010 at 1:04 PM, Orvar Korvar > wrote: >> A) Resilver = Defrag. True/false? > > False.  Resilver just rebuilds a drive in a vdev based on the > redundant data stored on the other drives in the vdev.  Simil

Re: [zfs-discuss] resilver = defrag?

2010-09-09 Thread Freddie Cash
> B) If I buy larger drives and resilver, does defrag happen? No. > C) Does zfs send zfs receive mean it will defrag? No. ZFS doesn't currently have a defragmenter. That will come when the legendary block pointer rewrite feature is committed.

Re: [zfs-discuss] Suggested RaidZ configuration...

2010-09-08 Thread Freddie Cash
I don't think you'd be able to get a 500 GB SATA disk to resilver in a 24-disk raidz vdev (even a raidz1) in a 50% full pool. Especially if you are using the pool for anything at the same time.

Re: [zfs-discuss] slog and TRIM support [SEC=UNCLASSIFIED]

2010-08-26 Thread Freddie Cash
write-only. -M uses MLC flash, which is optimised for fast reads. Ideal for an L2ARC which is (basically) read-only. -E tends to have smaller capacities, which is fine for ZIL. -M tends to have larger capacities, which is perfect for L2ARC.

Re: [zfs-discuss] (preview) Whitepaper - ZFS Pools Explained - feedback welcome

2010-08-26 Thread Freddie Cash
ND opensolaris commands when a command is shown. I haven't finished reading it yet (okay, barely read through the contents list), but would you be interested in the FreeBSD equivalents for the commands, if they differ?

Re: [zfs-discuss] ZFS Storage server hardwae

2010-08-25 Thread Freddie Cash
FireWire. If there's any way to run cables from inside the case, you can "make do" with plain SATA and longer cables. Otherwise, you'll need to look into something other than a MacMini for your storage box.

Re: [zfs-discuss] shrink zpool

2010-08-25 Thread Freddie Cash
On Wed, Aug 25, 2010 at 11:34 AM, Mike DeMarco wrote: > Is it currently or near future possible to shrink a zpool "remove a disk" Short answer: no. Long answer: search the archives for "block pointer rewrite" for all the gory details. :)

Re: [zfs-discuss] New Supermicro SAS/SATA controller: AOC-USAS2-L8e in SOHO NAS and HD HTPC

2010-08-16 Thread Freddie Cash
And the ones in the middle have "simple" XOR engines for doing the RAID stuff in hardware.

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-14 Thread Freddie Cash
If you really don't want to use the ports tree, there's pkg_upgrade (part of the bsdadminscripts port). IOW, if you don't want to compile things on FreeBSD, you don't have to. :)

Re: [zfs-discuss] Reconfigure zpool

2010-08-06 Thread Freddie Cash
raidz3? Back up the data in the pool, destroy the pool, create a new pool (consider using multiple raidz vdevs instead of one giant raidz vdev), copy the data back. There's no other way.

Re: [zfs-discuss] Optimal Disk configuration

2010-07-22 Thread Freddie Cash
ey on disks to have the same usable space. And, adding multiple raidz vdevs (each with under 10 disks) to a single pool (aka stripe of raidz) will give better performance than a single large raidz vdev.

Re: [zfs-discuss] Confused about consumer drives and zfs can someone help?

2010-07-21 Thread Freddie Cash
For best performance, use 2x 2-drive mirrors. For best redundancy, use 1x 4-drive raidz2. For middle-of-the-road performance/redundancy, use 1x 4-drive raidz1. Note: newegg.ca has a sale on right now. WD Caviar Black 1 TB drives are only $85 CDN.

Re: [zfs-discuss] ZFS on Ubuntu

2010-07-20 Thread Freddie Cash
couldn't wait for FreeBSD to get ZFSv20+). But the zfs-fuse system was just too unstable to be usable for even simple testing.

Re: [zfs-discuss] how to create a concat vdev.

2010-07-19 Thread Freddie Cash
Then rename: there is no zpool rename command, so export the new pool and import it under the old name. The commands are not exact; read the man pages to get the exact syntax for the send/recv part. However, doing so will make the pool extremely fragile. Any issues with any of the 8 LUNs, and the whole pool dies as there is no redundancy.
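The rename step, spelled out (pool names from the message):

# zpool export newpool
# zpool import newpool oldpool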

Re: [zfs-discuss] zfs send to remote any ideas for a faster way than ssh?

2010-07-19 Thread Freddie Cash
You should be able to saturate a 10G link using zfs send/recv, so long as both the systems can read/write that fast. http://www.psc.edu/networking/projects/hpn-ssh/

Re: [zfs-discuss] Recommended RAM for ZFS on various platforms

2010-07-16 Thread Freddie Cash
laptops). However, the "rule of thumb" for ZFS is 2 GB of RAM as a bare minimum, using the 64-bit version of FreeBSD. The "sweet spot" is 4 GB of RAM. But more is always better.

Re: [zfs-discuss] Encryption?

2010-07-11 Thread Freddie Cash
hard to keep it running. You definitely want to do the ZFS bits from within FreeBSD.

Re: [zfs-discuss] Should i enable Write-Cache ?

2010-07-08 Thread Freddie Cash
turning the controller into a "dumb" SATA controller).

Re: [zfs-discuss] Remove non-redundant disk

2010-07-07 Thread Freddie Cash
k03 disk04
Replace one of the drives with a larger one:
zpool attach poolname disk01 disk05
zpool detach poolname disk01
Carry on with the add and replace methods as needed until you have your 6-mirror pool. No vdev removals required.

Re: [zfs-discuss] ZFS on Caviar Blue (Hard Drive Recommendations)

2010-06-30 Thread Freddie Cash
s. Attached to 3Ware 9550SXU and 9650SE RAID controllers, configured as Single Drive arrays. There's also 8 WD Caviar Green 1.5 TB drives in there, which are not very good (even after twiddling the idle timeout setting via wdidle3). Definitely avoid the Green/GP line of drives.

Re: [zfs-discuss] ZFS on Ubuntu

2010-06-26 Thread Freddie Cash
On Sat, Jun 26, 2010 at 12:20 AM, Ben Miles wrote: > What supporting applications are there on Ubuntu for RAIDZ? None. Ubuntu doesn't officially support ZFS. You can kind of make it work using the ZFS-FUSE project. But it's not stable, nor recommended.

Re: [zfs-discuss] ZFS on Ubuntu

2010-06-25 Thread Freddie Cash
with patches available for ZFSv15 and ZFSv16. You'll get a more stable, better-performing system than trying to shoehorn ZFS-FUSE into Ubuntu (we've tried with Debian, and ZFS-FUSE is good for short-term testing, but not production use).

Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Freddie Cash
You want each vdev to be made up of as few physical disks as possible (for your size and redundancy requirements), and your pool to be made up of as many vdevs as possible.

Re: [zfs-discuss] Complete Linux Noob

2010-06-16 Thread Freddie Cash
preferably of the same configuration (all mirrors, all raidz1, all raidz2, etc). You can add vdevs to the pool at any time. You cannot expand a raidz vdev by adding drives, though (convert a 4-drive raidz1 to a 5-drive raidz1). Nor can you convert between raidz types (4-drive raidz1 to a 4-drive raidz2).

Re: [zfs-discuss] Complete Linux Noob

2010-06-15 Thread Freddie Cash
The same way you access any harddrive over the network:
- NFS
- SMB/CIFS
- iSCSI
- etc
It just depends on what level you want to access the storage (files, shares, block devices, etc).

Re: [zfs-discuss] zpool export / import discrepancy

2010-06-15 Thread Freddie Cash
All 8 drives of the first vdev are on the first controller, all 8 drives of the third vdev are on the second controller, with the second vdev being split across both controllers. Everything is still running smoothly.

Re: [zfs-discuss] Please trim posts

2010-06-11 Thread Freddie Cash
exity that makes everything super simple and easy for them ... and a royal pain for everyone else (kinda like Windows). :) In the end, it all comes down to user education.

Re: [zfs-discuss] Native ZFS for Linux

2010-06-11 Thread Freddie Cash
On Fri, Jun 11, 2010 at 12:25 PM, Bob Friesenhahn < bfrie...@simple.dallas.tx.us> wrote: > On Fri, 11 Jun 2010, Freddie Cash wrote: > >> >> For the record, the following paragraph was incorrectly quoted by Bob. This paragraph was originally written by Erik Trimble: >

Re: [zfs-discuss] Native ZFS for Linux

2010-06-11 Thread Freddie Cash
an then call > from userland. Which is essentially what the ZFS FUSE folks have been > reduced to doing. > > The nvidia shim is only needed to be able to ship the non-GPL binary driver with the GPL binary kernel. If you don't use the binaries, you don't use the shim.

Re: [zfs-discuss] zfs list sizes - newbie question

2010-06-04 Thread Freddie Cash
"space available" output of various tools (like zfs list, df, etc). -- Freddie Cash fjwc...@gmail.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] ZFS Usage on drives

2010-06-04 Thread Freddie Cash
ittle and therefore only resilver 200-300 GB of data. > When in doubt, read the man page. :) zpool iostat -v

Re: [zfs-discuss] one more time: pool size changes

2010-06-03 Thread Freddie Cash
You can grow the space available in a raidz vdev by replacing each drive in the raidz vdev with a larger drive. We just did this, going from 8x 500 GB drives in a raidz2 vdev, to 8x 1.5 TB drives in a raidz2 vdev.
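One drive at a time (device names hypothetical; wait for each resilver to finish before the next replace):

# zpool replace tank da0 da8
... repeat for each remaining drive ...
# zpool export tank
# zpool import tank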

Re: [zfs-discuss] one more time: pool size changes

2010-06-02 Thread Freddie Cash
(You may need to export/import the pool for the space to become available.) We've used both of the above quite successfully, both at home and at work. Not sure what your buddy was talking about. :)
