Does it support 3TB drives?
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Budy,
here are some links. Remember, the reason you see corrupted files is that ZFS detects them. You probably had corruption earlier as well, but your hardware did not notice it. This is called silent corruption. ZFS is designed to detect and correct silent corruption, which no normal
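The detection half of this can be sketched in a few lines of Python. This is a hypothetical model, not ZFS code: ZFS stores a fletcher4 or SHA-256 checksum in each block's parent pointer and verifies it on every read; the sketch uses SHA-256 to show why a silent bit flip cannot slip past.

```python
import hashlib

def checksum(block: bytes) -> bytes:
    # Stand-in for the per-block checksum ZFS keeps in the parent
    # block pointer (fletcher4 or SHA-256); SHA-256 here for the sketch.
    return hashlib.sha256(block).digest()

# Write path: the checksum is stored alongside (actually above) the data.
data = b"important payload" * 1000
stored_sum = checksum(data)

# Silent corruption: the disk flips a bit but reports a successful read.
corrupted = bytearray(data)
corrupted[5] ^= 0x01

# Read path: recompute and compare. A mismatch means the hardware
# returned bad data even though it signalled no error.
print(checksum(bytes(corrupted)) == stored_sum)  # False: corruption detected
```

With redundancy (a mirror or raidz), ZFS then reads another copy, verifies it, and rewrites the bad one, which is the "correct" half.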
On Sun, 17 Oct 2010 03:05:34 PDT, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:
here are some links.
Wow, that's a great overview, thanks!
--
( Kees Nuyt
)
c[_]
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
If scrub is operating at a block-level (and I think it is), then how can checksum failures be mapped to file names? For example, this is a long-requested feature of zfs
The default blocksize is 128K. If you are using mirrors, then each block on
disk will be 128K whenever possible. But if you're using raidzN with a
capacity of M disks (M disks useful capacity + N disks redundancy) then the
block size on each individual disk will be 128K / M. Right? This is one
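That arithmetic can be checked directly. This is a sketch of the claim as stated (a full 128K record striped evenly across the M data disks of a raidzN vdev), ignoring padding and partial-stripe effects:

```python
RECORDSIZE = 128 * 1024  # default ZFS recordsize in bytes

def per_disk_chunk(m_data_disks: int) -> int:
    # On a mirror, each disk holds the whole 128K block.
    # On raidzN with M data disks, each data disk gets roughly
    # 128K / M of the block (parity goes on the N extra disks).
    return RECORDSIZE // m_data_disks

print(per_disk_chunk(1))  # 131072: mirror, full block per disk
print(per_disk_chunk(4))  # 32768: e.g. a 6-disk raidz2 (4 data + 2 parity)
```

Smaller per-disk chunks are one reason wide raidz vdevs generate more, smaller I/Os per block than mirrors do.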
On Fri, 15 Oct 2010, Gerry Bragg wrote:
Is it possible for a read to bypass the write cache and fetch from
disk before the flush of the cache to disk occurs?
No. ZFS is fully coherent in memory. On a server, most accesses are to the data in memory rather than from disk.
Bob
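Bob's point is that a read can never race past a buffered write, because reads are served through the same in-memory state the writes went into. A minimal model of that invariant (hypothetical, not ZFS internals):

```python
class CoherentCache:
    """Toy write-back cache: reads always see pending writes."""

    def __init__(self):
        self.dirty = {}  # writes buffered in memory, not yet on disk
        self.disk = {}

    def write(self, key, value):
        self.dirty[key] = value  # buffered; flushed later

    def read(self, key):
        # A read cannot bypass the cache and hit disk directly:
        # dirty (newest) data always wins over the on-disk copy.
        return self.dirty.get(key, self.disk.get(key))

    def flush(self):
        self.disk.update(self.dirty)
        self.dirty.clear()

c = CoherentCache()
c.write("a", 1)
print(c.read("a"))  # 1, even before any flush to disk
```

The flush only changes where the data lives, never what a reader observes.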
On Sun, 17 Oct 2010, Edward Ned Harvey wrote:
The default blocksize is 128K. If you are using mirrors, then each block on disk will be 128K whenever possible. But if you're using raidzN with a capacity of M disks (M disks useful capacity + N disks redundancy) then the block size on each individual disk will be 128K / M.
On 10/17/2010 9:38 AM, Edward Ned Harvey wrote:
The default blocksize is 128K. If you are using mirrors, then each block on disk will be 128K whenever possible. But if you're using raidzN with a capacity of M disks (M disks useful capacity + N disks redundancy) then the block size on each individual disk will be 128K / M.
I would just like to confirm whether a vdev failure leads to failure of the whole pool.
For example, if I created a pool from two RAID-Z2 vdevs, and three drives fail
within the first vdev, is all the data within the whole pool unrecoverable?
On 10/18/10 06:28 AM, Simon Breden wrote:
I would just like to confirm whether a vdev failure would lead to failure of the whole pool.
For example, if I created a pool from two RAID-Z2 vdevs, and three drives fail within the first vdev, is all the data within the whole pool unrecoverable?
OK, thanks Ian.
Another example:
Would you lose all pool data if you had two vdevs: (1) a RAID-Z2 vdev and (2) a
two drive mirror vdev, and three drives in the RAID-Z2 vdev failed?
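The rule behind both of Simon's examples fits in a few lines: a pool stripes data across all top-level vdevs, so it survives only if every vdev survives. A hypothetical model (not ZFS code):

```python
def vdev_survives(kind: str, total: int, failed: int) -> bool:
    # raidz2 tolerates up to 2 drive failures; a mirror needs
    # at least one live drive.
    if kind == "raidz2":
        return failed <= 2
    if kind == "mirror":
        return failed < total
    raise ValueError(f"unknown vdev kind: {kind}")

def pool_survives(vdevs) -> bool:
    # Data is striped across all top-level vdevs, so losing any
    # single vdev loses the entire pool.
    return all(vdev_survives(*v) for v in vdevs)

# Simon's example: a 6-disk raidz2 with 3 failed drives, plus a
# perfectly healthy 2-way mirror.
print(pool_survives([("raidz2", 6, 3), ("mirror", 2, 0)]))  # False: pool lost
```

The healthy mirror does not help: its half of the stripes survives, but every block that had data on the dead raidz2 vdev is gone, so the pool as a whole is unrecoverable.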
On Sun, Oct 17, 2010 at 12:31 PM, Simon Breden sbre...@gmail.com wrote:
OK, thanks Ian.
Another example:
Would you lose all pool data if you had two vdevs: (1) a RAID-Z2 vdev and (2)
a two drive mirror vdev, and three drives in the RAID-Z2 vdev failed?
If you lose 1 vdev, you lose the whole pool.
On Oct 17, 2010, at 6:38 AM, Edward Ned Harvey wrote:
The default blocksize is 128K. If you are using mirrors, then each block on disk will be 128K whenever possible. But if you're using raidzN with a capacity of M disks (M disks useful capacity + N disks redundancy) then the block size on each individual disk will be 128K / M.
On Oct 17, 2010, at 6:17 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
If scrub is operating at a block-level (and I think it is), then how can checksum failures be mapped to file names?
On Oct 16, 2010, at 9:48 PM, Derek G Nokes wrote:
I tried using format to format the drive and got the following:
Ready to format. Formatting cannot be interrupted
and takes 5724 minutes (estimated). Continue? y
Beginning format. The current time is Sat Oct 16 23:58:17 2010
Formatting...
On Oct 16, 2010, at 4:57 AM, Edward Ned Harvey wrote:
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
raidzN takes a really long time to resilver (code written inefficiently, it's a known problem.) If you had a huge raidz3, it would literally never finish, because it couldn't