If that -- ignoring cache flush requests -- is the whole reason why
SSDs are so fast, I'm glad I haven't got one yet.
They're fast for random reads and writes because they don't have seek
latency. They're fast for sequential IO because they aren't limited by
spindle speed.
--
Brandon High : bh...@freaks.com
On Mon, Jul 30, 2012 at 7:11 AM, GREGG WONDERLY gregg...@gmail.com wrote:
I thought I understood that copies would not be on the same disk; I guess I
need to go read up on this again.
ZFS attempts to put copies on separate devices, but there's no guarantee.
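For example, to keep two copies of everything in a dataset (name hypothetical):
# zfs set copies=2 tank/important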
-B
--
Brandon High : bh...@freaks.com
Run 'fmdump -eV'; it should have some (rather
extensive) information.
-B
--
Brandon High : bh...@freaks.com
There were many fixes and new features added between
snv_117 and snv_134 (the last OpenSolaris release). It might be worth
updating to snv_134 at the very least.
-B
--
Brandon High : bh...@freaks.com
of thing can mean interference between
some combination of multiple send/receives at the same time, on the
same filesystem?
Look at 'zfs hold', 'zfs holds', and 'zfs release'. Sends and receives
will place holds on snapshots to prevent them from being changed.
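For example, to see what is pinning a snapshot and then clear a hold
(snapshot and tag names hypothetical):
# zfs holds tank/fs@snap
# zfs release mytag tank/fs@snap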
-B
--
Brandon High : bh...@freaks.com
I've been using 8 x 3TB 5k3000 in a raidz2 for about a year without issue.
The Deskstar 3TB come off the same production line as the Ultrastar
5k3000. I would avoid the 2TB and smaller 5k3000; they come off a
separate production line.
-B
--
Brandon High : bh...@freaks.com
The 7K3000 and 5K3000 drives have 512B physical sectors.
-B
--
Brandon High : bh...@freaks.com
it's a somewhat important decision.
-B
--
Brandon High : bh...@freaks.com
it might be done from a
shell prompt.
rm ./-c ./-O ./-k
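The POSIX end-of-options marker should also work:
rm -- -c -O -k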
--
Brandon High : bh...@freaks.com
as a cache device with the Z68 chipset.
-B
--
Brandon High : bh...@freaks.com
The 2TB and smaller 5k3000 drives are
not manufactured on the same line as the Ultrastar and seem to have
lower reliability. Only the 3TB 5k3000 shares specs with the Ultrastar
5k3000.
-B
--
Brandon High : bh...@freaks.com
The 100GB Intel 710 costs ~$650.
The 311 is a good choice for home or budget users, and it seems that
the 710 is much bigger than it needs to be for slog devices.
-B
--
Brandon High : bh...@freaks.com
it to a startup script.
-B
--
Brandon High : bh...@freaks.com
times.
-B
--
Brandon High : bh...@freaks.com
to 80%).
Intel recently added the 311, a small SLC-based drive for use as a
temp cache with their Z68 platform. It's limited to 20GB, but it might
be a better fit for use as a ZIL than the 320.
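Adding one as a log device is a one-liner (pool and device names hypothetical):
# zpool add tank log c1t2d0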
-B
--
Brandon High : bh...@freaks.com
-the-solaris-device-tree
-B
--
Brandon High : bh...@freaks.com
You can create
another vdev to add to your pool though.
If you're adding another vdev, it should have the same geometry as
your current (i.e., 4 drives). The zpool command will complain if you
try to add a vdev with different geometry or redundancy, though you
can force it with -f.
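For example, to add a second 4-drive raidz vdev (device names hypothetical):
# zpool add tank raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0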
-B
--
Brandon High : bh...@freaks.com
if this was involved here.
Using dedup on a pool that houses an Oracle DB is Doing It Wrong in so
many ways...
-B
--
Brandon High : bh...@freaks.com
Fragmentation is a real issue with pools that are (or have
been) very full. The data gets written out in fragments and has to be
read back in the same order.
If the mythical bp_rewrite code ever shows up, it will be possible to
defrag a pool. But not yet.
-B
--
Brandon High : bh...@freaks.com
TRIM gives hints to the garbage collector that sectors are no longer
in use. When the GC runs, it can more easily find flash blocks that
aren't in use, or combine several mostly-empty blocks, and erase or
otherwise free them for reuse later.
-B
--
Brandon High : bh...@freaks.com
the pool.
You can also use the Live CD or Live USB to access your pool or possibly fix
your existing installation.
You will have to force the zpool import with either a reinstall or a Live
boot.
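From a Live boot that would look something like (pool name as appropriate):
# zpool import -f -R /a rpool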
-B
--
Brandon High : bh...@freaks.com
bandwidth from your main storage
pools than from the cache devices.
-B
--
Brandon High : bh...@freaks.com
the replacement drive.
Since you've physically replaced the drive, you should just have to do:
# zpool replace tank c10t0d0
The pool should resilver, and I think the spare should automatically
detach. If not,
# zpool remove tank c10t6d0
should take care of it.
-B
--
Brandon High : bh...@freaks.com
Wildfire,
etc).
-B
--
Brandon High : bh...@freaks.com
be able to issue new certificates for public products.
Please try again later
-B
--
Brandon High : bh...@freaks.com
But it would be a really bad idea.
-B
--
Brandon High : bh...@freaks.com
fine. This card uses the same Marvell
controller as the x4500.
Performance is as good as, if not slightly better than, the WD10EADS drives
that I replaced. Of course, the pool was about 92% full with the
smaller drives ...
-B
--
Brandon High : bh...@freaks.com
are met.
-B
--
Brandon High : bh...@freaks.com
it even once.
-B
--
Brandon High : bh...@freaks.com
) should be fine until the volume gets very full.
-B
--
Brandon High : bh...@freaks.com
I tried to use 2TB drives on
an Atom N270-based board and they were not recognized, but they worked
fine under FreeBSD.
-B
--
Brandon High : bh...@freaks.com
SAS device, or with SATA drives. A single
port cable can be used with a single- or dual-ported SAS device
(although it will only use one port) or with a SATA drive. A SATA
cable can be used with a SATA device.
-B
--
Brandon High : bh...@freaks.com
The conclusion is that it
would require bp_rewrite.
Offline (or deferred) dedup certainly seems more attractive given the
current real-time performance.
-B
--
Brandon High : bh...@freaks.com
with an 8-drive raidz2,
though my usage is fairly light. The system is more than fast enough
to saturate gigabit ethernet for sequential reads and writes. My
drives were WD10EADS Green drives.
-B
--
Brandon High : bh...@freaks.com
On Tue, May 24, 2011 at 12:41 PM, Richard Elling
richard.ell...@gmail.com wrote:
There are many ZFS implementations, each evolving as the contributors desire.
Diversity and innovation is a good thing.
... unless Oracle's zpool v30 is different than Nexenta's v30.
-B
--
Brandon High : bh...@freaks.com
Richard would probably know for certain.
There will probably be a fork at some point to an OSS ZFS and an
Oracle ZFS. Hopefully neither side will actively try to break
compatibility.
-B
--
Brandon High : bh...@freaks.com
io:::done
{
        /* for each completed IO: time, device, starting LBA, R/W, size */
        printf("%d %s %d %s %d\n", timestamp,
            args[1]->dev_statname, args[0]->b_blkno,
            (args[0]->b_flags & B_WRITE ? "W" : "R"),
            args[0]->b_bcount);
}
For every completed IO, this should give you the timestamp, device
name, start LBA, Read or Write and length of the IO.
-B
--
Brandon High : bh...@freaks.com
feed it the output of 'lspci -vv -n'.
You may have to disable some on-board devices to get through the
installer, but I couldn't begin to guess which.
-B
--
Brandon High : bh...@freaks.com
On Tue, May 17, 2011 at 11:10 AM, Hung-ShengTsao (Lao Tsao) Ph.D.
laot...@gmail.com wrote:
maybe do
zpool import -R /a rpool
'zpool import -N' may work as well.
-B
--
Brandon High : bh...@freaks.com
I thought a power-of-two number of data disks was still
recommended, due to the way that ZFS splits records/blocks in a raidz
vdev. Or are you responding to some other point?
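As a rough illustration: a 128KB record across 4 data disks splits
evenly into 4 x 32KB columns, while across 5 data disks it splits into
5 x ~25.6KB, which has to be rounded up to whole sectors and wastes a
little space in every stripe.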
-B
--
Brandon High : bh...@freaks.com
environments.
-B
--
Brandon High : bh...@freaks.com
What's most frustrating is that this is the third time I've built this
pool due to corruption like this, within three months. :(
You may have an underlying hardware problem, or there could be a bug
in the FreeBSD implementation that you're tripping over.
-B
--
Brandon High : bh...@freaks.com
for read workloads.
-B
--
Brandon High : bh...@freaks.com
It wasn't that long ago when 66MB/s ATA was considered a waste because
no drive could use that much bandwidth. These days a slow drive has
max throughput greater than 110MB/s.
(OK, looking at some online reviews, it was about 13 years ago. Maybe
I'm just old.)
-B
--
Brandon High : bh...@freaks.com
but sometimes very different implementations.
-B
--
Brandon High : bh...@freaks.com
This could also be why the full sends
perform better than incremental sends.
-B
--
Brandon High : bh...@freaks.com
As with ext4, block alignment
is determined by partitioning and slices.
-B
--
Brandon High : bh...@freaks.com
smaller. You'll have to worry about the guests' block
alignment in the context of the image file, since two identical files
may not create identical blocks as seen from ZFS. This means you may
get only fractional savings and have an enormous DDT.
-B
--
Brandon High : bh...@freaks.com
limitations, and it sucks when you hit them.
-B
--
Brandon High : bh...@freaks.com
the send is stalled. You will have to fiddle with the buffer size
and other options to tune it for your use.
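If the buffering tool in question is mbuffer, a typical pipeline looks
something like this (host and dataset names hypothetical):
# zfs send tank/fs@snap | mbuffer -s 128k -m 1G | ssh host 'zfs recv tank/fs'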
-B
--
Brandon High : bh...@freaks.com
better off cloning datasets that contain an
unconfigured install and customizing from there?
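Something like (dataset names hypothetical):
# zfs snapshot tank/golden@base
# zfs clone tank/golden@base tank/vm01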
-B
--
Brandon High : bh...@freaks.com
You don't need to specify --whole-file; it's implied when copying on
the same system. --inplace can play badly with hard links and
shouldn't be used.
It probably will be slower than other options, but it may be more
accurate, especially with -H.
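A sketch of the kind of invocation I mean (paths hypothetical):
# rsync -aH /tank/src/ /tank/dst/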
-B
--
Brandon High : bh...@freaks.com
, since files on both sides
need to be read and checksummed.
-B
--
Brandon High : bh...@freaks.com
NTFS supports sparse files.
http://www.flexhex.com/docs/articles/sparse-files.phtml
-B
--
Brandon High : bh...@freaks.com
There's not much you can do about it short of deleting
datasets and/or snapshots.
-B
--
Brandon High : bh...@freaks.com
for non-dedup
datasets, and is in fact the default.
As an aside: Erik, any idea when the 159 bits will make it to the public?
-B
--
Brandon High : bh...@freaks.com
You will probably want to set it back to default after you're done.
-B
--
Brandon High : bh...@freaks.com
On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash fjwc...@gmail.com wrote:
Running ZFSv28 on 64-bit FreeBSD 8-STABLE.
I'd suggest trying to import the pool into snv_151a (Solaris 11
Express), which is the reference and development platform for ZFS.
-B
--
Brandon High : bh...@freaks.com
] sha256 uncompressed LE contiguous unique
unencrypted 1-copy size=2L/2P birth=236799L/236799P fill=1
cksum=55c9f21af6399be:11f9d4f5ff4cb109:2af8b798671e47ba:d19caf78da295df5
How can I translate this into datasets or files?
-B
--
Brandon High : bh...@freaks.com
Enabling dedup forces sha256.
The default checksum used for deduplication is sha256 (subject to
change). When dedup is enabled, the dedup checksum algorithm overrides
the checksum property.
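For example (dataset name hypothetical):
# zfs set dedup=on tank/fs
# zfs set dedup=verify tank/fs
The first uses sha256 for dedup'd blocks; the second adds a
byte-for-byte verify when checksums match.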
-B
--
Brandon High : bh...@freaks.com
--
Brandon High : bh...@freaks.com
On Thu, Apr 28, 2011 at 3:48 PM, Ian Collins i...@ianshome.com wrote:
Dedup is at the block, not file level.
Files are usually composed of blocks.
-B
--
Brandon High : bh...@freaks.com
from?
Since I have some datasets with dedup'd data, I'm a little paranoid
about tanking the system if they are destroyed.
-B
--
Brandon High : bh...@freaks.com
On Wed, Apr 27, 2011 at 12:51 PM, Lamp Zy lam...@gmail.com wrote:
Any ideas how to identify which drive is the one that failed so I can
replace it?
Try the following:
# fmdump -eV
# fmadm faulty
-B
--
Brandon High : bh...@freaks.com
, but at 13 hours in, the resilver has been
managing ~ 100M/s and is 70% done.
-B
--
Brandon High : bh...@freaks.com
 0.0    0.0    0.0    0.0   0.0  0.0    0.0    0.0    0   0  c0t0d0
 0.0    0.0    0.0    0.0   0.0  0.0    0.0    0.0    0   0  c0t1d0
--
Brandon High : bh...@freaks.com
also be referred to by its shortened
column name, volblock.
--
Brandon High : bh...@freaks.com
1 1 c0t1d0
--
Brandon High : bh...@freaks.com
with the first spare. (I'd suggest
verifying the device names before running it.)
# zpool replace fwgpool0 c4t5000C5001128FE4Dd0 c4t5000C50014D70072d0
-B
--
Brandon High : bh...@freaks.com
:1.
-B
--
Brandon High : bh...@freaks.com
On Mon, Apr 25, 2011 at 5:26 PM, Brandon High bh...@freaks.com wrote:
Setting zfs_resilver_delay seems to have helped some, based on the
iostat output. Are there other tunables?
I found zfs_resilver_min_time_ms while looking. I've tried bumping it
up considerably, without much change.
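For what it's worth, these can be inspected and poked at runtime with
mdb (use with care; tunable name as above):
# echo zfs_resilver_delay/D | mdb -k
# echo zfs_resilver_delay/W0t0 | mdb -kw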
'zpool
and seem
to hang until it's completed.
-B
--
Brandon High : bh...@freaks.com
, I
suspect from the constant writes.)
-B
--
Brandon High : bh...@freaks.com
Solaris or Solaris 11 Express may
complete it faster.
Any tips greatly appreciated,
Just wait...
-B
--
Brandon High : bh...@freaks.com
It's good to hear that there's a new feature being worked on,
rather than the radio silence we've had.
-B
--
Brandon High : bh...@freaks.com
?
Yes, you can do it; no, it is not recommended.
I had a need to do something similar to what you're attempting and
ended up using a Live CD (which doesn't have an rpool to have a naming
conflict) to do the manipulations.
-B
--
Brandon High : bh...@freaks.com
this, however.
-B
--
Brandon High : bh...@freaks.com
version.
-B
--
Brandon High : bh...@freaks.com
The guest needs real redundancy on its zfs storage, and not just multiple vdsk on the same
host disk / lun. Either give it access to the raw devices, or use
iSCSI, or create your vdsk on different luns and raidz them, etc.
-B
--
Brandon High : bh...@freaks.com
, but it certainly won't hurt.
-B
--
Brandon High : bh...@freaks.com
So you will need to create a new VM store after the
recordsize is tuned.
You can change the recordsize and copy the vmdk files on the nfs
server, which will re-write them with a smaller recordsize.
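I.e., something like this (names hypothetical; recordsize only affects
newly written blocks):
# zfs set recordsize=8k tank/vmstore
# cp guest.vmdk guest.vmdk.new && mv guest.vmdk.new guest.vmdk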
-B
--
Brandon High : bh...@freaks.com
a nice safety
net to have.
-B
--
Brandon High : bh...@freaks.com
2.) Why do we see 4MB-8MB/s of *writes* to the filesystem when we do a
'zfs send' to /dev/null ?
Is anything else using the filesystems in the pool?
-B
--
Brandon High : bh...@freaks.com
On Sun, Feb 27, 2011 at 7:35 PM, Brandon High bh...@freaks.com wrote:
It moves from best fit to any fit at a certain point, which is at
~ 95% (I think). Best fit looks for a large contiguous space to avoid
fragmentation while any fit looks for any free space.
I got the terminology wrong; it's
4 drives, with an expansion
slot for an additional controller. I think some people have reported
success with these on the list.
-B
--
Brandon High : bh...@freaks.com
power and lower minimum receive power. An internal port
might work with a SATA to eSATA cable or adapter, but it's not
guaranteed to.
-B
--
Brandon High : bh...@freaks.com
from best fit to any fit at a certain point, which is at
~ 95% (I think). Best fit looks for a large contiguous space to avoid
fragmentation while any fit looks for any free space.
-B
--
Brandon High : bh...@freaks.com
they exist.
-B
--
Brandon High : bh...@freaks.com
CPU time.
What about an inexpensive SAS card (e.g. Supermicro AOC-USAS-L4i) and
external SAS enclosure (e.g. Sans Digital TowerRAID TR4X)? It would
cost about $350 for the setup.
-B
--
Brandon High : bh...@freaks.com
assertion doesn't seem to hold up.
I think he meant that if one drive in a mirror dies completely, then
any single read error on the remaining drive is not recoverable.
With raidz2 (or a 3-way mirror for that matter), if one drive dies
completely, you still have redundancy.
-B
--
Brandon High : bh...@freaks.com
It's not recommended to use different levels of redundancy in a pool,
so you may want to consider using mirrors for everything. This also
makes it easier to add or upgrade capacity later.
-B
--
Brandon High : bh...@freaks.com
ZFS is a different beast than UFS and doesn't require the same tuning.
-B
--
Brandon High : bh...@freaks.com
are being cached, because
any data that is written synchronously will be committed to stable
storage before the write returns.
-B
--
Brandon High : bh...@freaks.com
is less likely with the lower density?
More platters lead to more heat and higher power consumption. Most
drives are 3 or 4 platters, though Hitachi usually manufactures
5-platter drives as well.
-B
--
Brandon High : bh...@freaks.com
The current batch of 3TB drives is either 7200 RPM with 5 platters
and 667GB per platter or 5400 RPM with 4 platters at 750GB/platter.
-B
--
Brandon High : bh...@freaks.com
On Sat, Jan 29, 2011 at 8:31 AM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
What is the status of ZFS support for TRIM?
I believe it's been supported for a while now.
http://www.c0t0d0s0.org/archives/6792-SATA-TRIM-support-in-Opensolaris.html
-B
--
Brandon High : bh...@freaks.com
, and I've found them to all be about the same.
-B
--
Brandon High : bh...@freaks.com
RAID arrays at it.
Off the top of my head, I can think of 3 sources: LSI, Dell and Supermicro.
LSI sells the 620J and 630J. I believe these are what Dell re-labels
as the M1000.
Supermicro makes server chassis and sells JBOD kits.
There are many more, if you take time to look.
-B
--
Brandon High : bh...@freaks.com
a
second disk might destroy your data.
With raidz2, you can lose any 2 disks, but you pay for it with
somewhat lower performance.
-B
--
Brandon High : bh...@freaks.com