, statistics for every pool in the system is shown. If count is specified,
the command exits after count reports are printed.
:D
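For reference, the usage being described looks like this (pool name and numbers
are placeholders, not from the original post):
# zpool iostat 5 10
(report statistics every 5 seconds, exit after 10 reports)
# zpool iostat -v tank 5
(per-vdev statistics for pool 'tank' every 5 seconds, until interrupted)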
--
Freddie Cash
fjwc...@gmail.com
On Dec 13, 2012 8:02 PM, Fred Liu fred_...@issi.com wrote:
Assuming a secure and trusted env, we want to get the maximum transfer
speed without the overhead of ssh.
Add the HPN patches to OpenSSH and enable the NONE cipher. We can saturate
a gigabit link (980 Mbps) between two FreeBSD
the disks
and sort things out automatically.
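For the curious, the sort of pipeline being described looks roughly like this.
The None* options only exist in HPN-patched OpenSSH, and the pool, snapshot, and
host names are made up:
# zfs send -R tank/data@snap | \
      ssh -o NoneEnabled=yes -o NoneSwitch=yes backuphost zfs receive -d backup
(NONE only disables encryption of the bulk data stream; authentication is still
encrypted)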
--
Freddie Cash
fjwc...@gmail.com
You don't use replace on mirror vdevs.
'zpool detach' the failed drive. Then 'zpool attach' the new drive.
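Roughly, with placeholder pool and device names:
# zpool detach tank da3
(drop the failed disk out of the mirror)
# zpool attach tank da2 da5
(attach new disk da5 as a mirror of the surviving disk da2 and start the resilver)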
On Nov 27, 2012 6:00 PM, Chris Dunbar - Earthside, LLC
cdun...@earthside.net wrote:
Hello,
I have a degraded mirror set and this has happened a few times (not
always the
And you can try 'zpool online' on the failed drive to see if it comes back
online.
On Nov 27, 2012 6:08 PM, Freddie Cash fjwc...@gmail.com wrote:
You don't use replace on mirror vdevs.
'zpool detach' the failed drive. Then 'zpool attach' the new drive.
On Nov 27, 2012 6:00 PM, Chris Dunbar
/to/filesystem/.zfs/snapshot/snapname/ to new filesystem
Snapshot new filesystem.
rsync data from /path/to/filesystem/.zfs/snapshot/snapname+1/ to new filesystem
Snapshot new filesystem
See if zfs diff works.
If it does, repeat the rsync/snapshot steps for the rest of the snapshots.
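A rough sketch of that loop, with made-up pool/filesystem/snapshot names:
# rsync -a --delete /oldpool/fs/.zfs/snapshot/snap1/ /newpool/fs/
# zfs snapshot newpool/fs@snap1
# rsync -a --delete /oldpool/fs/.zfs/snapshot/snap2/ /newpool/fs/
# zfs snapshot newpool/fs@snap2
# zfs diff newpool/fs@snap1 newpool/fs@snap2
(then repeat the rsync + snapshot pair for each remaining snapshot)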
--
Freddie
Anandtech.com has a thorough review of it. Performance is consistent
(within 10-15% IOPS) across the lifetime of the drive, it has capacitors to
flush the RAM cache to disk, and it doesn't store user data in the cache. It's
also cheaper per GB than the 710 it replaces.
On 2012-11-13 3:32 PM, Jim Klimov
Ah, okay, that makes sense. I wasn't offended, just confused. :)
Thanks for the clarification.
On Oct 13, 2012 2:01 AM, Jim Klimov jimkli...@cos.ru wrote:
2012-10-12 19:34, Freddie Cash wrote:
On Fri, Oct 12, 2012 at 3:28 AM, Jim Klimov jimkli...@cos.ru wrote:
In fact, you can (although
server ran with mixed vdevs for a while (a 2 IDE-disk
mirror vdev with a 3 SATA-disk raidz1 vdev) as it was built using
scrounged parts.
But all my work file servers have matched vdevs.
--
Freddie Cash
fjwc...@gmail.com
- - - - - -
gpt/cache1 32.0G 32.0G 8M -
--
Freddie Cash
fjwc...@gmail.com
see health in the list of pool properties all
the times I've read the zpool man page.
--
Freddie Cash
fjwc...@gmail.com
On Thu, Oct 4, 2012 at 9:45 AM, Jim Klimov jimkli...@cos.ru wrote:
2012-10-04 20:36, Freddie Cash wrote:
On Thu, Oct 4, 2012 at 9:14 AM, Richard Elling richard.ell...@gmail.com
wrote:
On Oct 4, 2012, at 8:58 AM, Jan Owoc jso...@gmail.com wrote:
The return code for zpool is ambiguous. Do
If you're willing to try FreeBSD, there's HAST (aka high availability
storage) for this very purpose.
You use HAST to create mirror pairs using 1 disk from each box, thus
creating /dev/hast/* nodes. Then you use those to create the zpool on the
'primary' box.
All writes to the pool on the
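A minimal sketch of the HAST side, assuming two hosts (hosta/hostb) and one
disk per box; names and devices are made up, see hast.conf(5) and hastctl(8)
for the details:
resource disk0 {
    on hosta {
        local /dev/ada1
        remote hostb
    }
    on hostb {
        local /dev/ada1
        remote hosta
    }
}
# hastctl create disk0          (on both hosts, with hastd running)
# hastctl role primary disk0    (on the primary box only)
# zpool create tank /dev/hast/disk0
(with more disks you define more resources and build whatever vdev layout you
want out of the /dev/hast/* nodes)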
Query the size of the other drives in the vdev, obviously. ;) So long as
the replacement is larger than the smallest remaining drive, it'll work.
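On FreeBSD something like this does the trick (device name is a placeholder):
# diskinfo -v da2 | grep mediasize
(prints the size of the disk in bytes and in sectors)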
On Sep 5, 2012 8:57 AM, Yaverot yave...@computermail.net wrote:
--- skiselkov...@gmail.com wrote:
On 09/05/2012 05:06 AM, Yaverot wrote:
What is
SLOG, you probably want two of them in a mirror…
That's only true on older versions of ZFS. ZFSv19 (or 20?) includes
the ability to import a pool with a failed/missing log device. You
lose any data that is in the log and not in the pool, but the pool is
importable.
--
Freddie Cash
fjwc
to lack of SLOG devices.
Positive. :) I tested it with ZFSv28 on FreeBSD 9-STABLE a month or
two ago. See the updated man page for zpool, especially the bit about
import -m. :)
--
Freddie Cash
fjwc...@gmail.com
encryption) affect zfs specific features like data Integrity and
deduplication?
If you are using FreeBSD, why not use GELI to provide the block
devices used for the ZFS vdevs? That's the standard way to get
encryption and ZFS working on FreeBSD.
--
Freddie Cash
fjwc...@gmail.com
destroyed pools only. The -f option is also required.
-f Forces import, even if the pool appears to be potentially
active.
-m Enables import with missing log devices.
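So importing a pool whose log device has died looks something like this ('tank'
is a placeholder):
# zpool import -f -m tank
(the pool imports without the log; anything that was only in the log is lost)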
--
Freddie Cash
fjwc...@gmail.com
be
used to create ashift=12 vdevs on top of 512B, pseudo-512B, or 4K
drives.
# gnop create -S 4096 da{0,1,2,3,4,5,6,7}
# zpool create pool raidz2 da{0,1,2,3,4,5,6,7}.nop
# zpool export pool
# gnop destroy da{0,1,2,3,4,5,6,7}.nop
# zpool import -d /dev pool
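If you want to double-check the result, zdb shows the ashift that was actually
used (output format varies a bit between versions):
# zdb -C pool | grep ashift
(should report ashift: 12 for the vdev if the gnop trick worked)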
--
Freddie Cash
fjwc...@gmail.com
5.93x
--
Freddie Cash
fjwc...@gmail.com
On Tue, May 8, 2012 at 10:24 AM, Freddie Cash fjwc...@gmail.com wrote:
I have an interesting issue with one single ZFS filesystem in a pool.
All the other filesystems are fine, and can be mounted, snapshoted,
destroyed, etc. But this one filesystem, if I try to do any operation
on it (zfs
On Thu, Apr 26, 2012 at 4:34 AM, Deepak Honnalli
deepak.honna...@oracle.com wrote:
cachefs is present in Solaris 10. It is EOL'd in S11.
And for those who need/want to use Linux, the equivalent is FSCache.
--
Freddie Cash
fjwc...@gmail.com
encryption and we don't?
Can it be backported to illumos ...
It's too bad Oracle hasn't followed through (yet?) with their promise
to open-source the ZFS (and other CDDL-licensed?) code in Solaris 11.
:(
--
Freddie Cash
fjwc...@gmail.com
--
Freddie Cash
fjwc...@gmail.com
/products/accessories/addon/AOC-USAS-L4i_R.cfm
You could always check if there's an IT-mode firmware for the 921204i4e
card available on the LSI website, and flash that onto the card. That
disables/removes the RAID functionality from the card, turning it into
just an HBA.
--
Freddie Cash
fjwc
again.
# sysctl hw.physmem
hw.physmem: 6363394048
# sysctl vfs.zfs.arc_max
vfs.zfs.arc_max: 5045088256
(I lowered arc_max to 1 GB but it hasn't helped)
DO NOT LOWER THE ARC WHEN DEDUPE IS ENABLED!!
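If the real question is how much RAM the dedup table itself wants, something
like this gives a rough idea ('tank' is a placeholder; zdb output varies by
version):
# zdb -DD tank
(prints the DDT histogram along with on-disk and in-core size estimates for the
dedup tables)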
--
Freddie Cash
fjwc...@gmail.com
really should only be used for testing purposes.
--
Freddie Cash
fjwc...@gmail.com
manual spares)
- and more
Maybe in another 5 years or so, Btrfs will be up to where ZFS is today.
Just imagine where ZFS will be in 5 years or so. :)
--
Freddie Cash
fjwc...@gmail.com
. And whether or not zfs send is faster/better/easier/more
reliable than rsyncing snapshots (which is what we do currently).
Thanks for the info.
--
Freddie Cash
fjwc...@gmail.com
Just curious if anyone has looked into the relationship between zpool
dedupe, zfs zend dedupe, memory use, and network throughput.
For example, does 'zfs send -D' use the same DDT as the pool? Or does it
require more memory for its own DDT, thus impacting performance of both?
If you have a
in the archives that shows how ls -l, du, df,
zfs list, and zpool list work, and what each sees as disk usage.
Don't remember exactly who wrote it. It should definitely be added
to the ZFS Admin Guide, though. :)
--
Freddie Cash
fjwc...@gmail.com
of my zpool remains intact.
Note: you will have 0 redundancy on the ENTIRE POOL, not just that one
vdev. If that non-redundant vdev dies, you lose the entire pool.
Are you willing to take that risk, if one of the new drives is already DoA?
--
Freddie Cash
fjwc...@gmail.com
.
The only solution to the OP's question is to create a new pool, transfer the
data, and destroy the old pool. There are several ways to do this.
--
Freddie Cash
fjwc...@gmail.com
On Wed, Jun 1, 2011 at 2:34 PM, Freddie Cash fjwc...@gmail.com wrote:
On Wed, Jun 1, 2011 at 12:45 PM, Eric Sproul espr...@omniti.com wrote:
On Wed, Jun 1, 2011 at 2:54 PM, Matt Harrison
iwasinnamuk...@genestate.com wrote:
Hi list,
I've got a pool that's got a single raidz1 vdev. I've
you can safely use Illumos, Nexenta, FreeBSD, etc with ZFSv28. You can
also use Solaris 11 Express, so long as you don't upgrade the pool version
(SolE includes ZFSv31).
--
Freddie Cash
fjwc...@gmail.com
Intel motherboard
- 2.8 GHz P4 CPU
- 3 SATA1 harddrives connected to motherboard, in a raidz1 vdev
- 2 IDE harddrives connected to a Promise PCI controller, in a mirror vdev
- 2 GB non-ECC SDRAM
- 2 GB USB stick for the OS install
- FreeBSD 8.2
--
Freddie Cash
fjwc...@gmail.com
On Fri, Apr 29, 2011 at 5:17 PM, Brandon High bh...@freaks.com wrote:
On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash fjwc...@gmail.com wrote:
Running ZFSv28 on 64-bit FreeBSD 8-STABLE.
I'd suggest trying to import the pool into snv_151a (Solaris 11
Express), which is the reference
that way, building a complete list of
files/directories to copy before starting the copy.
rsync 3.x doesn't. 3.x builds an initial file list for the first
directory and then starts copying files while continuing to build the
list of files, so there's only a small pause at the beginning.
--
Freddie
, with --no-whole-file --inplace (and other options), works
extremely fast for updates.
--
Freddie Cash
fjwc...@gmail.com
filesystems
from being unmounted, which prevented the pool from being exported
(even though I have a zfs unmount -f and zpool export -f
fail-safe), which locked up the shutdown process requiring a power
reset.
:(
--
Freddie Cash
fjwc...@gmail.com
On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash fjwc...@gmail.com wrote:
Is there any way, yet, to import a pool with corrupted space_map
errors, or zio->io_type != ZIO_TYPE_WRITE assertions?
I have a pool comprised of 4 raidz2 vdevs of 6 drives each. I have
almost 10 TB of data in the pool (3
On Fri, Apr 29, 2011 at 5:00 PM, Alexander J. Maidak ajmai...@mchsi.com wrote:
On Fri, 2011-04-29 at 16:21 -0700, Freddie Cash wrote:
On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash fjwc...@gmail.com wrote:
Is there any way, yet, to import a pool with corrupted space_map
errors, or zio->io_type
On Mon, Apr 25, 2011 at 10:55 AM, Erik Trimble erik.trim...@oracle.com wrote:
Min block size is 512 bytes.
Technically, isn't the minimum block size 2^(ashift value)? Thus, on
4 KB disks where the vdevs have an ashift=12, the minimum block size
will be 4 KB.
--
Freddie Cash
fjwc...@gmail.com
, and then it just started taking longer and
longer for each drive.
--
Freddie Cash
fjwc...@gmail.com
or three
of the file will be different.
Repeat changing different lines in the file, and watch as disk usage
only increases a little, since the files still share (or have in
common) a lot of blocks.
ZFS dedupe happens at the block layer, not the file layer.
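A quick way to watch that, with made-up paths (dedup already enabled on the
filesystem):
# cp /tank/fs/bigfile /tank/fs/copy1
# zpool get dedupratio tank
(edit a few lines in copy1 and check again; space usage barely grows and the
dedupratio stays high, because only the changed blocks become unique)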
--
Freddie Cash
fjwc...@gmail.com
Gbps SAS,
multilaned, multipathed, but not multi-), some don't (it's not
IBM/Oracle/HP/etc, oh noes!!).
Chenbro also has similar setups to SuperMicro. Again, it's not a
big-name storage company nor uber-expensive, but the technology is
the same. Is that enterprise-grade?
:D
--
Freddie Cash
the smaller vdevs get to be full. But it
works.
--
Freddie Cash
fjwc...@gmail.com
Creating 1 pool gives you the best performance and the most
flexibility. Use separate filesystems on top of that pool if you want
to tweak all the different properties.
Going with 1 pool also increases your chances for dedupe, as dedupe is
done at the pool level.
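For example (dataset names and property choices are just illustrative):
# zfs create -o compression=on tank/home
# zfs create -o recordsize=8K tank/db
# zfs set dedup=on tank/backups
(one pool, three filesystems, each tuned separately)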
--
Freddie Cash
fjwc...@gmail.com
it.
--
Freddie Cash
fjwc...@gmail.com
a mirror to that drive, to keep some redundancy.
And to ad4s1d as well, since it's also a stand-alone, non-redundant vdev.
Since there are two drives that are non-redundant, it would probably
be best to re-do the pool.
--
Freddie Cash
fjwc...@gmail.com
via zpool export.
One more reason to stop using hardware storage systems and just let
ZFS handle the drives directly. :)
--
Freddie Cash
fjwc...@gmail.com
) has experimental patches
available for ZFSv28.
--
Freddie Cash
fjwc...@gmail.com
On Mon, Oct 18, 2010 at 6:34 AM, Edward Ned Harvey sh...@nedharvey.com wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Freddie Cash
If you lose 1 vdev, you lose the pool.
As long as 1 vdev is striped and not mirrored, that's true
On Mon, Oct 18, 2010 at 8:51 AM, Darren J Moffat
darr...@opensolaris.org wrote:
On 18/10/2010 16:48, Freddie Cash wrote:
On Mon, Oct 18, 2010 at 6:34 AM, Edward Ned Harveysh...@nedharvey.com
wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org
is lost.
Similar for the pool.
--
Freddie Cash
fjwc...@gmail.com
.
Since then, I've avoided any vdev with more than 8 drives in it.
--
Freddie Cash
fjwc...@gmail.com
0
  mirror-0   ONLINE       0     0     0
    c1t2d0   ONLINE       0     0     0
    c1t3d0   ONLINE       0     0     0
  mirror-1   ONLINE       0     0     0
    c1t4d0   ONLINE       0     0     0
    c1t5d0   ONLINE       0     0     0
--
Freddie Cash
fjwc...@gmail.com
and
keep the arc cache warm with metadata. Any suggestions?
Would adding a cache device (L2ARC) and setting primarycache=metadata
and secondarycache=all on the root dataset do what you need?
That way ARC is used strictly for metadata, and L2ARC is used for metadata+data.
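Something along these lines (pool and device names are placeholders):
# zpool add tank cache da8
# zfs set primarycache=metadata tank
# zfs set secondarycache=all tank
(the properties are inherited, so setting them on the root dataset covers
everything below it)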
--
Freddie Cash
fjwc
while doing normal
reads/writes is also fun.
Using the controller software (if a RAID controller) to delete
LUNs/disks is also fun.
--
Freddie Cash
fjwc...@gmail.com
to create a new pool, thus creating a duplicate of
the original pool.
--
Freddie Cash
fjwc...@gmail.com
. Any existing data is not affected
until it is re-written or copied.
--
Freddie Cash
fjwc...@gmail.com
RAID array.
B) If I buy larger drives and resilver, does defrag happen?
No.
C) Does zfs send | zfs receive mean it will defrag?
No.
ZFS doesn't currently have a defragmenter. That will come when the
legendary block pointer rewrite feature is committed.
--
Freddie Cash
fjwc...@gmail.com
On Thu, Sep 9, 2010 at 1:26 PM, Freddie Cash fjwc...@gmail.com wrote:
On Thu, Sep 9, 2010 at 1:04 PM, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:
A) Resilver = Defrag. True/false?
False. Resilver just rebuilds a drive in a vdev based on the
redundant data stored on the other drives
raidz vdev (even a
raidz1) in a 50% full pool. Especially if you are using the pool for
anything at the same time.
--
Freddie Cash
fjwc...@gmail.com
is shown.
I haven't finished reading it yet (okay, barely read through the
contents list), but would you be interested in the FreeBSD equivalents
for the commands, if they differ?
--
Freddie Cash
fjwc...@gmail.com
is (basically) write-only.
-M uses MLC flash, which is optimised for fast reads. Ideal for an
L2ARC which is (basically) read-only.
-E tends to have smaller capacities, which is fine for ZIL.
-M tends to have larger capacities, which is perfect for L2ARC.
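In zpool terms (devices are placeholders):
# zpool add tank log mirror da8 da9
(small SLC devices as a mirrored log/ZIL)
# zpool add tank cache da10
(larger MLC device as L2ARC; cache devices can't be mirrored, and don't need to
be)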
--
Freddie Cash
fjwc...@gmail.com
cables from inside the case, you can make
do with plain SATA and longer cables.
Otherwise, you'll need to look into something other than a MacMini for
your storage box.
--
Freddie Cash
fjwc...@gmail.com
.
And the ones in the middle have simple XOR engines for doing the
RAID stuff in hardware.
--
Freddie Cash
fjwc...@gmail.com
to use the ports tree, there's pkg_upgrade (part of
the bsdadminscripts port).
IOW, if you don't want to compile things on FreeBSD, you don't have to. :)
--
Freddie Cash
fjwc...@gmail.com
instead of one giant raidz vdev),
copy the data back.
There's no other way.
--
Freddie Cash
fjwc...@gmail.com
large raidz vdev.
--
Freddie Cash
fjwc...@gmail.com
, use 1x 4-drive raidz1.
Note: newegg.ca has a sale on right now. WD Caviar Black 1 TB drives
are only $85 CDN.
--
Freddie Cash
fjwc...@gmail.com
.
--
Freddie Cash
fjwc...@gmail.com
of thumb for ZFS is 2 GB of RAM as a bare minimum,
using the 64-bit version of FreeBSD. The sweet spot is 4 GB of RAM.
But, more is always better.
--
Freddie Cash
fjwc...@gmail.com
from within FreeBSD.
--
Freddie Cash
fjwc...@gmail.com
controller).
--
Freddie Cash
fjwc...@gmail.com
on (this may not be perfectly
correct, going from memory):
zpool attach poolname disk05 disk01
zpool detach poolname disk01
Carry on with the add and replace methods as needed until you have
your 6-mirror pool.
No vdev removals required.
--
Freddie Cash
fjwc...@gmail.com
to 3Ware 9550SXU and 9650SE RAID controllers, configured as
Single Drive arrays.
There are also 8 WD Caviar Green 1.5 TB drives in there, which are not
very good (even after twiddling the idle timeout setting via wdidle3).
Definitely avoid the Green/GP line of drives.
--
Freddie Cash
fjwc
On Sat, Jun 26, 2010 at 12:20 AM, Ben Miles merloc...@hotmail.com wrote:
What supporting applications are there on Ubuntu for RAIDZ?
None. Ubuntu doesn't officially support ZFS.
You can kind of make it work using the ZFS-FUSE project. But it's not
stable, nor recommended.
--
Freddie Cash
for ZFSv15 and ZFSv16. You'll
get a more stable, better-performing system than trying to shoehorn
ZFS-FUSE into Ubuntu (we've tried with Debian, and ZFS-FUSE is good
for short-term testing, but not production use).
--
Freddie Cash
fjwc...@gmail.com
of as few physical
disks as possible (for your size and redundancy requirements), and
your pool to be made up of as many vdevs as possible.
--
Freddie Cash
fjwc...@gmail.com
, etc).
You can add vdevs to the pool at any time.
You cannot expand a raidz vdev by adding drives, though (convert a 4-drive
raidz1 to a 5-drive raidz1). Nor can you convert between raidz types
(4-drive raidz1 to a 6-drive raidz2).
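For example, growing a pool by adding a whole new vdev (names are placeholders):
# zpool add tank raidz1 da4 da5 da6 da7
(adds a second 4-drive raidz1 vdev alongside the existing one; the existing vdev
itself is never reshaped)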
--
Freddie Cash
fjwc...@gmail.com
then call
from userland. Which is essentially what the ZFS FUSE folks have been
reduced to doing.
The nvidia shim is only needed to be able to ship the non-GPL binary driver
with the GPL binary kernel. If you don't use the binaries, you don't use
the shim.
--
Freddie Cash
fjwc...@gmail.com
On Fri, Jun 11, 2010 at 12:25 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Fri, 11 Jun 2010, Freddie Cash wrote:
For the record, the following paragraph was incorrectly quoted by Bob. This
paragraph was originally written by Erik Trimble:
I don't mean to be a PITA, but I'm
everything super simple and easy for them
... and a royal pain for everyone else (kinda like Windows). :)
In the end, it all comes down to user education.
--
Freddie Cash
fjwc...@gmail.com
, read the man page. :)
zpool iostat -v
--
Freddie Cash
fjwc...@gmail.com
available output of various tools (like zfs
list, df, etc).
--
Freddie Cash
fjwc...@gmail.com
vdev, by
replacing each drive in the raidz vdev with a larger drive. We just did
this, going from 8x 500 GB drives in a raidz2 vdev, to 8x 1.5 TB drives in a
raidz2 vdev.
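The per-drive loop looks roughly like this (device names made up; on newer pool
versions you can set autoexpand=on instead of the final export/import):
# zpool replace tank da0 da8
(wait for the resilver to finish, then repeat for each remaining drive)
# zpool export tank && zpool import tank
(once every drive is replaced, re-import the pool to pick up the new size)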
--
Freddie Cash
fjwc...@gmail.com
for the space to become available).
We've used both of the above quite successfully, both at home and at work.
Not sure what your buddy was talking about. :)
--
Freddie Cash
fjwc...@gmail.com
'
You forgot to list which property to get. See the command that you quoted.
:)
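i.e. something like this (property and dataset are placeholders):
# zfs get compressratio tank/home
# zfs get all tank/home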
--
Freddie Cash
fjwc...@gmail.com
On Fri, May 21, 2010 at 10:59 AM, Brandon High bh...@freaks.com wrote:
On Fri, May 21, 2010 at 7:12 AM, David Dyer-Bennet d...@dd-b.net wrote:
On Thu, May 20, 2010 19:44, Freddie Cash wrote:
And you can always patch OpenSSH with HPN, thus enabling the NONE
cipher,
which disables
to improve
transfer rates, especially on 100 Mbps or faster links.
--
Freddie Cash
fjwc...@gmail.com
be thrashing 12 drives.
--
Freddie Cash
fjwc...@gmail.com
, the WD Greens may be okay.
For anything else, they're crap. Plain and simple.
--
Freddie Cash
fjwc...@gmail.com
in the raidz2 vdev). Performance has improved
slightly, though.
--
Freddie Cash
fjwc...@gmail.com
. :) And the drives will last longer than
3 months or so. (If they've removed the download for wdidle3, I have a copy
here.)
--
Freddie Cash
fjwc...@gmail.com
of our 500 GB drives in our storage
servers are WD RE Black drives. All of our 400 GB drives are Seagate
E-something drives. No complaints about those. But, they're enterprise,
RAID-qualified drives.
--
Freddie Cash
fjwc...@gmail.com