Hi all,
I'm using a custom snapshot scheme which snapshots every hour, day,
week and month, rotating 24h, 7d, 4w and so on. What would be the best
way to zfs send/receive these things? I'm a little confused about how
this works for delta updates...
Best regards
The
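For the delta-update question: `zfs send -i old new` transmits only the blocks that changed between the two snapshots, so the destination only needs to already hold `old`. A minimal sketch of how a rotating-snapshot scheme could be replicated — the dataset names `tank/data` and `backup/data` are made up, and this only prints the commands it would run:

```shell
SRC=tank/data
DST=backup/data

# Print the send/receive pipeline needed to bring the backup up to
# date: one incremental per snapshot newer than the last one already
# received. Each `zfs send -i` carries only the delta between the two.
plan_sends() {
    prev=$1; shift            # $1 = newest snapshot on the destination
    for snap in "$@"; do      # remaining args = newer source snapshots
        echo "zfs send -i ${SRC}@${prev} ${SRC}@${snap} | zfs receive ${DST}"
        prev=$snap
    done
}

plan_sends hourly-00 hourly-01 hourly-02
```

With `zfs send -I` (capital i), a single stream can carry the whole range of intermediate snapshots instead of one command per step.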
On 25 Sep 2010, at 19:56, Giovanni Tirloni gtirl...@sysdroid.com wrote:
We see correctable memory errors on ECC systems on a monthly basis. It's not a
question of if they'll happen, but how often.
"DRAM Errors in the Wild: A Large-Scale Field Study" is worth a read if you
have time.
From: Richard Elling [mailto:richard.ell...@gmail.com]
It is relatively easy to find the latest common snapshot on two file
systems. Once you know the latest common snapshot, you can send the
incrementals up to the latest.
I've always relied on the snapshot names matching. Is there a
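If the snapshot names do match on both sides, the latest common snapshot falls out of a plain list intersection. A sketch with made-up names; in practice each list would come from `zfs list -H -t snapshot -o name -s creation` on each side, which sorts oldest-first:

```shell
# Print the last snapshot name that appears in both lists, assuming
# both files are ordered oldest-first. $1 = destination list, $2 = source.
latest_common() {
    grep -Fx -f "$1" "$2" | tail -n 1
}

src=$(mktemp); dst=$(mktemp)
printf '%s\n' daily-01 daily-02 daily-03 daily-04 > "$src"
printf '%s\n' daily-01 daily-02 > "$dst"
latest_common "$dst" "$src"    # prints daily-02
rm -f "$src" "$dst"
```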
Typically on most filesystems, the inode number of the root
directory of the filesystem is 2, 0 being unused and 1 historically
once invisible and used for bad blocks (no longer done, but kept
reserved so as not to invalidate assumptions implicit in ufsdump tapes).
However, my observation seems
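The inode-2 convention is easy to check on a live system: `ls -di` prints a directory's own inode number. Illustrative only — ZFS, btrfs and tmpfs assign their own root inode numbers, so 2 is a convention, not a guarantee:

```shell
# Show the inode number of the root directory of the root filesystem.
# On ext4 and UFS this is typically 2, for the historical reasons above.
ls -di /
```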
Richard L. Hamilton rlha...@smart.net wrote:
[snip]
Richard L. Hamilton wrote:
[snip]
I just witnessed a resilver that took 4h for 27 GB of data. Setup is 3x raid-z2
stripes with 6 disks per raid-z2. Disks are 500 GB in size. No checksum errors.
It seems like an exorbitantly long time. The other 5 disks in the stripe with
the replaced disk were at 90% busy and ~150 IO/s each.
On 9/23/2010 at 12:38 PM Erik Trimble wrote:
| [snip]
| If you don't really care about ultra-low-power, then there's absolutely
| no excuse not to buy a USED server-class machine which is 1- or 2-
| generations back. They're dirt cheap, readily available,
| [snip]
On Sep 26, 2010, at 4:41 AM, Edward Ned Harvey sh...@nedharvey.com wrote:
From: Richard Elling [mailto:richard.ell...@gmail.com]
[snip]
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jason J. W. Williams
[snip]
27G on a
- Original Message -
[snip]
On Sun, 26 Sep 2010, Edward Ned Harvey wrote:
27G on a 6-disk raidz2 means approx 6.75G per disk. Ideally, the
disk could write 7G = 56 Gbit in a couple minutes if it were all
sequential and no other activity in the system. So you're right to
suspect something is suboptimal, but the root
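The back-of-the-envelope math above, spelled out. The 100 MB/s sustained write rate per disk is an assumption — roughly what a 500 GB drive of that era could stream sequentially:

```shell
awk 'BEGIN {
    data_gb    = 27          # user data resilvered
    data_disks = 6 - 2       # 6-disk raidz2: two disks worth of parity
    per_disk   = data_gb / data_disks      # GB written per disk
    secs       = per_disk * 1024 / 100     # seconds at 100 MB/s
    printf "%.2f GB per disk, ~%.0f s sequential vs 14400 s observed\n",
           per_disk, secs
}'
# -> 6.75 GB per disk, ~69 s sequential vs 14400 s observed
```

The two-orders-of-magnitude gap is why random-read-bound resilver (walking the block tree) rather than raw write bandwidth is the usual suspect.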
On 9/26/2010 8:06 AM, devsk wrote:
On 9/23/2010 at 12:38 PM Erik Trimble wrote:
| [snip]
Upgrading is definitely an option. What is the current snv favorite for ZFS
stability? I apologize, with all the Oracle/Sun changes I haven't been paying
as close attention to bug reports on zfs-discuss as I used to.
-J
Sent via iPhone
On Sep 26, 2010, at 10:22, Roy
On Sep 26, 2010, at 11:03 AM, Jason J. W. Williams wrote:
[snip]
OpenIndiana b147
[snip]
OpenIndiana b147 is the latest binary release, but it also includes
On Sep 26, 2010, at 1:16 PM, Roy Sigurd Karlsbakk wrote:
[snip]
OpenIndiana b147