[zfs-discuss] FW: Setting default user/group quotas?

2012-04-24 Thread Fred Liu


-----Original Message-----
From: Fred Liu 
Sent: Tuesday, April 24, 2012 11:41
To: develo...@lists.illumos.org
Subject: Setting default user/group quotas?

It seems this feature still isn't available. Are there any plans to implement
it, or is it hard to do?
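
For context, per-user and per-group quotas do exist today, but each one must
be set explicitly per principal - there is no default that applies
automatically to new users. A minimal sketch (the dataset and principal names
are hypothetical):

  # Explicit quotas, one zfs property per user/group:
  zfs set userquota@alice=10G tank/home
  zfs set groupquota@staff=100G tank/home
  # Report per-user usage and quotas on the dataset:
  zfs userspace tank/home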


Thanks.

Fred


Re: [zfs-discuss] Two disks giving errors in a raidz pool, advice needed

2012-04-24 Thread Tim Cook
On Tue, Apr 24, 2012 at 12:16 AM, Matt Breitbach
matth...@flash.shanje.com wrote:

 So this is a point of debate that probably deserves being brought to the
 floor (probably for the umpteenth time, but indulge me).  I've heard from
 several people whom I'd consider experts that scrubbing once per year is
 sufficient, once per quarter is _possibly_ excessive, and once a week is
 downright overkill.  Since a scrub thrashes your disks, I'd like to avoid
 it if at all possible.

 My opinion is that it depends on the data.  If it's all data at rest, ZFS
 can't correct bit-rot unless the data is read back at regular intervals.

 My biggest question on this: how often does bit-rot occur on media that
 isn't read or written to excessively, but is just spinning most of the day
 and only has 10-20GB physically read from the spindles daily?  We all know
 that as data ages, it gets accessed less and less frequently.  At what
 point should you be scrubbing that old data every few weeks to make sure a
 bit or two hasn't flipped?
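
 One way to calibrate this empirically (a sketch; 'tank' is a placeholder
 pool name) is to look at what recent scrubs actually found:

   # After each scrub, see whether anything was repaired; a long run
   # of "repaired 0" suggests the interval could be stretched out.
   zpool status tank
   # check the scan:/scrub: line and the per-device CKSUM column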

 FYI - I personally scrub once per month.  Probably overkill for my data,
 but I'm paranoid like that.

 -Matt



 -----Original Message-----


 How often do you normally run a scrub, before this happened?  It's
 possible the errors were accumulating for a while but went undetected for
 lack of read attempts to the disks.  Scrub more often!

 --
 Dan.





Personally, unless the dataset is huge and you're using z3, I'd be scrubbing
once a week.  Even if it's z3, just do a window on Sundays or something so
that you make it through the whole dataset at least once a month.

There's no reason NOT to scrub that I can think of other than the overhead
- which shouldn't matter if you're doing it during off hours.
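
Something like this in root's crontab would do it (a sketch; 'tank' is a
placeholder pool name):

  # Kick off a scrub every Sunday at 02:00, outside business hours.
  0 2 * * 0 /usr/sbin/zpool scrub tank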

--Tim


Re: [zfs-discuss] Two disks giving errors in a raidz pool, advice needed

2012-04-24 Thread Jim Klimov

On 2012-04-24 19:14, Tim Cook wrote:

Personally, unless the dataset is huge and you're using z3, I'd be
scrubbing once a week.  Even if it's z3, just do a window on Sundays or
something so that you make it through the whole dataset at least once a
month.


+1, I guess.
Among other considerations, if the scrub does find irreparable errors,
you might still have recent-enough backups or other sources of the data,
so the situation won't be as fatal as when you only look for errors once
a year ;)


There's no reason NOT to scrub that I can think of other than the
overhead - which shouldn't matter if you're doing it during off hours.


I heard a rumor that HDDs can detect flaky sectors while reading
(i.e. detect a bit-rot error and recover thanks to ECC), and in that
case they will automatically remap the recovered sector. So reading
the disks at the (logical) locations where your data is known to live
may be a good way to prolong its available life.

This of course depends somewhat on disk reliability - i.e. the drive
should be rated for 24/7 operation and be within its warranted age
(the mechanics within acceptable wear). No guarantees with other
drives, although I don't think weekly scrubs would be fatal.

If only ZFS could queue scrubbing reads more linearly... ;)

//Jim




Re: [zfs-discuss] Two disks giving errors in a raidz pool, advice needed

2012-04-24 Thread Richard Elling
On Apr 24, 2012, at 8:35 AM, Jim Klimov wrote:

 On 2012-04-24 19:14, Tim Cook wrote:
 Personally, unless the dataset is huge and you're using z3, I'd be
 scrubbing once a week.  Even if it's z3, just do a window on Sundays or
 something so that you make it through the whole dataset at least once a
 month.

It depends. There are cascading failure modes in your system that are not
media related and can bring your system to its knees. Scrubs and resilvers
can trigger or exacerbate these.

 +1, I guess.
 Among other considerations, if the scrub does find irreparable errors,
 you might still have recent-enough backups or other sources of the data,
 so the situation won't be as fatal as when you only look for errors once
 a year ;)

There is considerable evidence that scrubs propagate errors on some systems
(though no such evidence for ZFS systems). So it is not a good idea to have
a blanket high-frequency scrub policy.

 
 There's no reason NOT to scrub that I can think of other than the
 overhead - which shouldn't matter if you're doing it during off hours.
 
 I heard a rumor that HDDs can detect flaky sectors while reading
 (i.e. detect a bit-rot error and recover thanks to ECC), and in that
 case they will automatically remap the recovered sector. So reading
 the disks at the (logical) locations where your data is known to live
 may be a good way to prolong its available life.

It is a SMART feature and the disks do it automatically for you.
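
For the curious, the remap counters are visible in the drive's SMART
attributes (a sketch using smartmontools; the device path is hypothetical):

  # Attribute 5 (Reallocated_Sector_Ct) counts sectors the drive has
  # already remapped; 197 (Current_Pending_Sector) counts suspects.
  smartctl -A /dev/rdsk/c5t0d0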
 -- richard

--
ZFS Performance and Training
richard.ell...@richardelling.com
+1-760-896-4422