Re: [zfs-discuss] [osol-discuss] zfs send/receive?

2010-09-26 Thread Casper . Dik
hi all, I'm using a custom snapshot scheme which snapshots every hour, day, week and month, rotating 24h, 7d, 4w and so on. What would be the best way to zfs send/receive these things? I'm a little confused about how this works for delta updates... Vennlige hilsener / Best regards The
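For what it's worth, a minimal sketch of the usual incremental pattern, assuming a dataset tank/data, timestamp-named snapshots, and a backup host reachable over ssh (all names here are placeholders):

    # one-time full send of the newest snapshot
    zfs send tank/data@2010-09-26_00 | ssh backuphost zfs receive -F backup/data
    # thereafter, send only the delta between the last sent snapshot and the new one
    zfs send -i tank/data@2010-09-26_00 tank/data@2010-09-26_01 | ssh backuphost zfs receive backup/data

The delta only works while the incremental source snapshot still exists on both sides, so the rotation must never destroy the last commonly held snapshot before the next send completes.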

Re: [zfs-discuss] non-ECC Systems and ZFS for home users

2010-09-26 Thread Alex Blewitt
On 25 Sep 2010, at 19:56, Giovanni Tirloni gtirl...@sysdroid.com wrote: We have correctable memory errors on ECC systems on a monthly basis. It's not a question of if they'll happen, but how often. "DRAM Errors in the Wild: A Large-Scale Field Study" is worth a read if you have time.

Re: [zfs-discuss] [osol-discuss] zfs send/receive?

2010-09-26 Thread Edward Ned Harvey
From: Richard Elling [mailto:richard.ell...@gmail.com] It is relatively easy to find the latest, common snapshot on two file systems. Once you know the latest, common snapshot, you can send the incrementals up to the latest. I've always relied on the snapshot names matching. Is there a
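A rough sketch of automating that by name, assuming snapshot names do match on both sides and using placeholder dataset and host names (tank/data, backup/data, backuphost):

    # list snapshot names in creation order on each side, stripped of the dataset prefix
    zfs list -H -t snapshot -o name -s creation -r tank/data | sed 's/.*@//' > local.list
    ssh backuphost "zfs list -H -t snapshot -o name -s creation -r backup/data" | sed 's/.*@//' > remote.list
    # print local names that also exist remotely, keeping creation order;
    # the last line is the latest common snapshot
    grep -Fx -f remote.list local.list | tail -1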

[zfs-discuss] fs root inode number?

2010-09-26 Thread Richard L. Hamilton
Typically on most filesystems, the inode number of the root directory of the filesystem is 2, 0 being unused and 1 historically once invisible and used for bad blocks (no longer done, but kept reserved so as not to invalidate assumptions implicit in ufsdump tapes). However, my observation seems
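The observation is easy to reproduce, since ls can print the inode number of a filesystem root directly (the mount points below are just examples):

    # -d lists the directory itself rather than its contents;
    # -i prefixes each entry with its inode number
    ls -di / /var /tank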

Re: [zfs-discuss] fs root inode number?

2010-09-26 Thread Casper . Dik
Typically on most filesystems, the inode number of the root directory of the filesystem is 2, 0 being unused and 1 historically once invisible and used for bad blocks (no longer done, but kept reserved so as not to invalidate assumptions implicit in ufsdump tapes). However, my observation seems

Re: [zfs-discuss] fs root inode number?

2010-09-26 Thread Joerg Schilling
Richard L. Hamilton rlha...@smart.net wrote: Typically on most filesystems, the inode number of the root directory of the filesystem is 2, 0 being unused and 1 historically once invisible and used for bad blocks (no longer done, but kept reserved so as not to invalidate assumptions implicit

Re: [zfs-discuss] fs root inode number?

2010-09-26 Thread Andrew Gabriel
Richard L. Hamilton wrote: Typically on most filesystems, the inode number of the root directory of the filesystem is 2, 0 being unused and 1 historically once invisible and used for bad blocks (no longer done, but kept reserved so as not to invalidate assumptions implicit in ufsdump

[zfs-discuss] Long resilver time

2010-09-26 Thread Jason J. W. Williams
I just witnessed a resilver that took 4h for 27 GB of data. Setup is 3x raid-z2 stripes with 6 disks per raid-z2. Disks are 500 GB in size. No checksum errors. It seems like an exorbitantly long time. The other 5 disks in the stripe with the replaced disk were at 90% busy and ~150 IO/s each

Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-26 Thread devsk
On 9/23/2010 at 12:38 PM Erik Trimble wrote: | [snip] |If you don't really care about ultra-low-power, then there's absolutely |no excuse not to buy a USED server-class machine which is 1- or 2- |generations back. They're dirt cheap, readily available, | [snip]

Re: [zfs-discuss] [osol-discuss] zfs send/receive?

2010-09-26 Thread Richard Elling
On Sep 26, 2010, at 4:41 AM, Edward Ned Harvey sh...@nedharvey.com wrote: From: Richard Elling [mailto:richard.ell...@gmail.com] It is relatively easy to find the latest, common snapshot on two file systems. Once you know the latest, common snapshot, you can send the incrementals up to
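Once the common snapshot is known, a sketch of catching up in one step (snapshot and host names are placeholders); -I, unlike -i, replicates every intermediate snapshot between the two endpoints:

    zfs send -I tank/data@common tank/data@latest | ssh backuphost zfs receive backup/data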

Re: [zfs-discuss] Long resilver time

2010-09-26 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jason J. W. Williams I just witnessed a resilver that took 4h for 27 GB of data. Setup is 3x raid-z2 stripes with 6 disks per raid-z2. Disks are 500 GB in size. No checksum errors. 27G on a
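Spelling out that per-disk figure, assuming the 27 GB is user data striped evenly across the vdev:

    27 GB data on a 6-disk raidz2 (4 data + 2 parity)  =>  27 GB x 6/4 = 40.5 GB raw
    40.5 GB raw / 6 disks                              =>  ~6.75 GB rewritten per disk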

Re: [zfs-discuss] Long resilver time

2010-09-26 Thread Roy Sigurd Karlsbakk
- Original Message - I just witnessed a resilver that took 4h for 27 GB of data. Setup is 3x raid-z2 stripes with 6 disks per raid-z2. Disks are 500 GB in size. No checksum errors. It seems like an exorbitantly long time. The other 5 disks in the stripe with the replaced disk were

Re: [zfs-discuss] Long resilver time

2010-09-26 Thread Bob Friesenhahn
On Sun, 26 Sep 2010, Edward Ned Harvey wrote: 27G on a 6-disk raidz2 means approx 6.75G per disk. Ideally, the disk could write 7G = 56 Gbit in a couple minutes if it were all sequential and no other activity in the system. So you're right to suspect something is suboptimal, but the root
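For concreteness, the sequential estimate works out as follows, assuming a conservative ~60 MB/s sustained write rate for a 500 GB disk of that era (the rate is an assumption, not a measurement):

    7 GB = 56 Gbit
    7,000 MB / 60 MB/s ≈ 117 s, i.e. roughly two minutes of purely sequential writing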

Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-26 Thread Erik Trimble
On 9/26/2010 8:06 AM, devsk wrote: On 9/23/2010 at 12:38 PM Erik Trimble wrote: | [snip] |If you don't really care about ultra-low-power, then there's absolutely |no excuse not to buy a USED server-class machine which is 1- or 2- |generations back. They're dirt cheap, readily available, |

Re: [zfs-discuss] Long resilver time

2010-09-26 Thread Jason J. W. Williams
Upgrading is definitely an option. What is the current snv favorite for ZFS stability? I apologize, with all the Oracle/Sun changes I haven't been paying as close attention to bug reports on zfs-discuss as I used to. -J Sent via iPhone Is your e-mail Premiere? On Sep 26, 2010, at 10:22, Roy

Re: [zfs-discuss] Long resilver time

2010-09-26 Thread Richard Elling
On Sep 26, 2010, at 11:03 AM, Jason J. W. Williams wrote: Upgrading is definitely an option. What is the current snv favorite for ZFS stability? I apologize, with all the Oracle/Sun changes I haven't been paying as close attention to bug reports on zfs-discuss as I used to. OpenIndiana b147

Re: [zfs-discuss] Long resilver time

2010-09-26 Thread Roy Sigurd Karlsbakk
Upgrading is definitely an option. What is the current snv favorite for ZFS stability? I apologize, with all the Oracle/Sun changes I haven't been paying as close attention to bug reports on zfs-discuss as I used to. OpenIndiana b147 is the latest binary release, but it also includes

Re: [zfs-discuss] Long resilver time

2010-09-26 Thread Richard Elling
On Sep 26, 2010, at 1:16 PM, Roy Sigurd Karlsbakk wrote: Upgrading is definitely an option. What is the current snv favorite for ZFS stability? I apologize, with all the Oracle/Sun changes I haven't been paying as close attention to bug reports on zfs-discuss as I used to. OpenIndiana b147