Re: [zfs-discuss] Supermicro AOC-USAS2-L8i

2010-10-17 Thread Orvar Korvar
Does it support 3TB drives? -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Finding corrupted files

2010-10-17 Thread Orvar Korvar
budy, here are some links. Remember, the reason you see corrupted files is that ZFS detects them. You probably had corruption earlier as well, but your hardware did not notice it. This is called Silent Corruption. But ZFS is designed to detect and correct Silent Corruption. Which no normal
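A minimal sketch of how you would surface those detected errors (the pool name `tank` is just a placeholder for this illustration):

```shell
# Walk every allocated block in the pool and verify its checksum.
# Where redundancy exists (mirror or raidz), blocks that fail the
# check are repaired from a good copy or from parity.
zpool scrub tank

# Show scrub progress and the per-device read/write/checksum
# error counters that accumulate as problems are found.
zpool status tank
```

This only works against a live pool, of course; the point is that corruption ZFS reports is corruption it found and, where possible, already fixed.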

Re: [zfs-discuss] Finding corrupted files

2010-10-17 Thread Kees Nuyt
On Sun, 17 Oct 2010 03:05:34 PDT, Orvar Korvar knatte_fnatte_tja...@yahoo.com wrote: here are some links. Wow, that's a great overview, thanks! -- ( Kees Nuyt ) c[_]

Re: [zfs-discuss] Finding corrupted files

2010-10-17 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Edward Ned Harvey If scrub is operating at a block-level (and I think it is), then how can checksum failures be mapped to file names? For example, this is a long-requested feature of zfs
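For what it's worth, on pools of this era the block-to-file mapping is already exposed once errors have been recorded: `zpool status -v` resolves damaged blocks back to file paths where it can. A sketch, with `tank` as a placeholder pool name and the listed path purely hypothetical:

```shell
# After a scrub (or ordinary reads) has recorded checksum failures,
# -v lists the affected files; when a path cannot be resolved, ZFS
# prints the dataset and object number instead.
zpool status -v tank
# errors: Permanent errors have been detected in the following files:
#         /tank/home/someuser/somefile      (hypothetical output)
```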

[zfs-discuss] RaidzN blocksize ... or blocksize in general ... and resilver

2010-10-17 Thread Edward Ned Harvey
The default blocksize is 128K. If you are using mirrors, then each block on disk will be 128K whenever possible. But if you're using raidzN with a capacity of M disks (M disks useful capacity + N disks redundancy) then the block size on each individual disk will be 128K / M. Right? This is one
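A quick sanity check of that arithmetic (assuming the default 128K recordsize and, as one illustrative case, a 6-disk raidz2, i.e. M = 4 data disks plus N = 2 parity disks):

```shell
# Hypothetical illustration of the claim above: a 128K logical block
# spread across M data disks in a raidzN stripe leaves roughly
# recordsize / M bytes of that block on each data disk.
RECORDSIZE=131072   # 128K, the default ZFS recordsize
M=4                 # data disks in a 6-disk raidz2 (4 data + 2 parity)
echo $((RECORDSIZE / M))   # per-disk share of the block: 32768 (32K)
```

So the wider the raidz stripe, the smaller each disk's chunk of a given block, which is part of why small-block workloads behave differently on raidz than on mirrors.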

Re: [zfs-discuss] ZFS cache inconsistencies with Oracle

2010-10-17 Thread Bob Friesenhahn
On Fri, 15 Oct 2010, Gerry Bragg wrote: Is it possible for a read to bypass the write cache and fetch from disk before the flush of the cache to disk occurs? No. ZFS is fully coherent in memory. On a server, most accesses are to the data in memory rather than from disk. Bob -- Bob

Re: [zfs-discuss] RaidzN blocksize ... or blocksize in general ... and resilver

2010-10-17 Thread Bob Friesenhahn
On Sun, 17 Oct 2010, Edward Ned Harvey wrote: The default blocksize is 128K. If you are using mirrors, then each block on disk will be 128K whenever possible. But if you're using raidzN with a capacity of M disks (M disks useful capacity + N disks redundancy) then the block size on each

Re: [zfs-discuss] RaidzN blocksize ... or blocksize in general ... and resilver

2010-10-17 Thread Kyle McDonald
On 10/17/2010 9:38 AM, Edward Ned Harvey wrote: The default blocksize is 128K. If you are using mirrors, then each block on disk will be 128K whenever possible. But if you're using raidzN with a capacity of M disks (M disks useful capacity +

[zfs-discuss] vdev failure - pool loss ?

2010-10-17 Thread Simon Breden
I would just like to confirm whether a vdev failure leads to failure of the whole pool. For example, if I created a pool from two RAID-Z2 vdevs, and three drives fail within the first vdev, is all the data within the whole pool unrecoverable?

Re: [zfs-discuss] vdev failure - pool loss ?

2010-10-17 Thread Ian Collins
On 10/18/10 06:28 AM, Simon Breden wrote: I would just like to confirm or not whether a vdev failure would lead to failure of the whole pool or not. For example, if I created a pool from two RAID-Z2 vdevs, and three drives fail within the first vdev, is all the data within the whole pool

Re: [zfs-discuss] vdev failure - pool loss ?

2010-10-17 Thread Simon Breden
OK, thanks Ian. Another example: Would you lose all pool data if you had two vdevs: (1) a RAID-Z2 vdev and (2) a two drive mirror vdev, and three drives in the RAID-Z2 vdev failed?

Re: [zfs-discuss] vdev failure - pool loss ?

2010-10-17 Thread Freddie Cash
On Sun, Oct 17, 2010 at 12:31 PM, Simon Breden sbre...@gmail.com wrote: OK, thanks Ian. Another example: Would you lose all pool data if you had two vdevs: (1) a RAID-Z2 vdev and (2) a two drive mirror vdev, and three drives in the RAID-Z2 vdev failed? If you lose 1 vdev, you lose the
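To make the failure domain concrete, here is a hedged sketch of Simon's second example (device names are placeholders): data is striped across all top-level vdevs, so losing any one vdev takes the whole pool with it.

```shell
# Hypothetical pool with two top-level vdevs: a 4-disk raidz2 and a
# 2-disk mirror. ZFS stripes blocks across BOTH vdevs; there is no
# redundancy BETWEEN vdevs, only within each one.
zpool create tank \
    raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 \
    mirror c1t0d0 c1t1d0

# The raidz2 vdev tolerates two failed disks; a third failure in
# that vdev faults it, and with it the entire pool -- even though
# every block on the mirror vdev is still intact.
```

In other words, redundancy is a per-vdev property, and the pool is only as durable as its weakest top-level vdev.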

Re: [zfs-discuss] RaidzN blocksize ... or blocksize in general ... and resilver

2010-10-17 Thread Richard Elling
On Oct 17, 2010, at 6:38 AM, Edward Ned Harvey wrote: The default blocksize is 128K. If you are using mirrors, then each block on disk will be 128K whenever possible. But if you're using raidzN with a capacity of M disks (M disks useful capacity + N disks redundancy) then the block size

Re: [zfs-discuss] Finding corrupted files

2010-10-17 Thread Richard Elling
On Oct 17, 2010, at 6:17 AM, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Edward Ned Harvey If scrub is operating at a block-level (and I think it is), then how can checksum failures be mapped to file names?

Re: [zfs-discuss] adding new disks and setting up a raidz2

2010-10-17 Thread Richard Elling
On Oct 16, 2010, at 9:48 PM, Derek G Nokes wrote: I tried using format to format the drive and got the following: Ready to format. Formatting cannot be interrupted and takes 5724 minutes (estimated). Continue? y Beginning format. The current time is Sat Oct 16 23:58:17 2010 Formatting...

Re: [zfs-discuss] Optimal raidz3 configuration

2010-10-17 Thread Richard Elling
On Oct 16, 2010, at 4:57 AM, Edward Ned Harvey wrote: From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us] raidzN takes a really long time to resilver (code written inefficiently, it's a known problem.) If you had a huge raidz3, it would literally never finish, because it couldn't