On Apr 30, 2010, at 11:44 AM, Freddie Cash wrote:
Sure, you don't have to scrub every single week. But you definitely want to
scrub more than once over the lifetime of the pool.
Yes. There have been studies of this and the results depend on the technical
(probabilities) and the comfort level
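Scheduling that periodic scrub from cron is the usual approach; a sketch of a root crontab entry, where the pool name "tank" and the monthly cadence are illustrative placeholders, not anything prescribed in this thread:

```
# Kick off a scrub of pool "tank" at 02:00 on the 1st of each month
# (pool name and schedule are examples -- adjust to taste)
0 2 1 * * /usr/sbin/zpool scrub tank
```

zpool scrub returns immediately and runs in the background; progress can be checked later with zpool status.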
On Mon, May 3, 2010 17:02, Richard Elling wrote:
On May 3, 2010, at 2:38 PM, David Dyer-Bennet wrote:
On Sun, May 2, 2010 14:12, Richard Elling wrote:
On May 1, 2010, at 1:56 PM, Bob Friesenhahn wrote:
On Fri, 30 Apr 2010, Freddie Cash wrote:
Without a periodic scrub that touches every
Hi Bob,
It is necessary to look at all the factors which might result in data loss before deciding what the most effective steps are to minimize the probability of loss.
Bob
On Sun, 2 May 2010, Tonmaus wrote:
I am under the impression that exactly those were the considerations for both the ZFS designers to implement a scrub function to ZFS and the author of Best Practises to recommend performing this function frequently.
I am hearing you are coming to a
- Roy Sigurd Karlsbakk r...@karlsbakk.net wrote:
Hi all
I have a test system with snv134 and 8x2TB drives in RAIDz2 and
currently no Zil or L2ARC. I noticed the I/O speed to NFS shares on
the testpool drops to something hardly usable while scrubbing the
pool.
How can I address this?
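One blunt mitigation, before any tuning, is to cancel an in-flight scrub and re-issue it off-hours; a minimal sketch, using the pool name from the message above (note that a cancelled scrub restarts from the beginning when re-issued):

```
# Check whether a scrub is running and how far along it is
zpool status testpool

# Cancel the in-flight scrub, then re-run "zpool scrub testpool"
# later, outside the NFS busy window
zpool scrub -s testpool
```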
On Sun, 2 May 2010, Richard Elling wrote:
These calculations are based on fixed MTBF. But disk MTBF decreases with age. Most disks are only rated at 3-5 years of expected lifetime. Hence, archivists use solutions with longer lifetimes (high quality tape = 30 years) and plans for migrating the
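To make the fixed-MTBF simplification concrete: under a constant failure rate, the chance that at least one of N disks fails during a t-hour window is 1 - exp(-N*t/MTBF). A quick sketch, where the MTBF rating, disk count, and window length are made-up illustrative numbers:

```shell
# Probability that at least one of N disks fails during a t-hour window,
# under the fixed-MTBF (constant failure rate) assumption criticized above.
mtbf=1000000   # rated MTBF in hours (illustrative)
disks=8        # e.g. the 8-drive raidz2 discussed in this thread
window=24      # length of the window in hours (e.g. one long scrub)
awk -v m="$mtbf" -v n="$disks" -v t="$window" \
    'BEGIN { printf "%.6f\n", 1 - exp(-n * t / m) }'
# -> 0.000192
```

The point of the quoted message is that the real rate is not constant: it climbs as drives age past their 3-5 year rating, so this same arithmetic understates the risk late in a pool's life.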
On 5/2/10 3:12 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:
On the flip-side, using 'zfs scrub' puts more stress on the system
which may make it more likely to fail. It increases load on the power
supplies, CPUs, interfaces, and disks. A system which might work fine
under normal
On Sun, 2 May 2010, Dave Pooser wrote:
If my system is going to fail under the stress of a scrub, it's going to
fail under the stress of a resilver. From my perspective, I'm not as scared
I don't disagree with any of the opinions you stated except to point
out that resilver will usually hit
On Fri, 30 Apr 2010, Freddie Cash wrote:
Without a periodic scrub that touches every single bit of data in the pool, how can you be sure that 10-year files that haven't been opened in 5 years are still intact?
You don't. But it seems that having two or three extra copies of the data on
In my opinion periodic scrubs are most useful for pools based on mirrors, or raidz1, and much less useful for pools based on raidz2 or raidz3. It is useful to run a scrub at least once on a well-populated new pool in order to validate the hardware and OS, but otherwise, the scrub is
On Thu, 29 Apr 2010, Tonmaus wrote:
Recommending to not using scrub doesn't even qualify as a
workaround, in my regard.
As a devoted believer in the power of scrub, I believe that after the
OS, power supplies, and controller have been verified to function with
a good scrubbing, if there is
Indeed the scrub seems to take too many resources from a live system. For instance I have a server with 24 disks (SATA 1TB) serving as an NFS store to a linux machine holding user mailboxes. I have around 200 users, with maybe 30-40% of active users at the same time.
As soon as the scrub process
I got this hint from Richard Elling, but haven't had time to test it much. Perhaps someone else could help?
roy
Interesting. If you'd like to experiment, you can change the limit of the number of scrub I/Os queued to each vdev. The default is 10, but that is too close to the normal
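The tunable being alluded to was, on OpenSolaris builds of that era, the kernel variable zfs_scrub_limit; the exact name is an assumption inferred from context here, so verify it against your build's source before writing kernel memory. It could be inspected and changed live with mdb, roughly like this:

```
# Show the current per-vdev scrub I/O limit as a decimal value
# (zfs_scrub_limit is assumed from context -- confirm it exists first)
echo "zfs_scrub_limit/D" | mdb -k

# Lower it from the default of 10 to 2; on some builds the new value
# only takes effect when the pool is next imported
echo "zfs_scrub_limit/W0t2" | mdb -kw
```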
On 28/04/2010 21:39, David Dyer-Bennet wrote:
The situations being mentioned are much worse than what seem reasonable
tradeoffs to me. Maybe that's because my intuition is misleading me about
what's available. But if the normal workload of a system uses 25% of its
sustained IOPS, and a scrub
On Apr 29, 2010, at 5:52 AM, Tomas Ögren wrote:
On 29 April, 2010 - Tomas Ögren sent me these 5,8K bytes:
On 29 April, 2010 - Roy Sigurd Karlsbakk sent me these 10K bytes:
I got this hint from Richard Elling, but haven't had time to test it much.
Perhaps someone else could help?
roy
On 29 April, 2010 - Richard Elling sent me these 2,5K bytes:
With these lower numbers, our pool is much more responsive over NFS. But taking snapshots is quite bad: a single recursive snapshot over ~800 filesystems took about 45 minutes, with NFS operations taking 5-10 seconds.
The server is a Fujitsu RX300 with a Quad Xeon 1.6GHz, 6G RAM, 8x400G SATA through a U320 SCSI-SATA box - Infortrend A08U-G1410, Sol10u8.
slow disks == poor performance
Should have enough oomph, but when you combine snapshot with a scrub/resilver, sync performance gets abysmal. Should
On Thu, 29 Apr 2010, Roy Sigurd Karlsbakk wrote:
While there may be some possible optimizations, i'm sure everyone
would love the random performance of mirror vdevs, combined with the
redundancy of raidz3 and the space of a raidz1. However, as in all
systems, there are tradeoffs.
In my
Zfs scrub needs to access all written data on all disks and is usually disk-seek or disk I/O bound so it is difficult to keep it from hogging the disk resources. A pool based on mirror devices will behave much more nicely while being scrubbed than one based on RAIDz2.
Experience
Hi Eric,
While there may be some possible optimizations, i'm sure everyone would love the random performance of mirror vdevs, combined with the redundancy of raidz3 and the space of a raidz1. However, as in all systems, there are tradeoffs.
I think we all may agree that the topic here is
On Wed, 28 Apr 2010, Richard Elling wrote:
the disk resources. A pool based on mirror devices will behave
much more nicely while being scrubbed than one based on RAIDz2.
The data I have does not show a difference in the disk loading while
scrubbing for different pool configs. All HDDs
On Tue, 27 Apr 2010, Roy Sigurd Karlsbakk wrote:
I have a test system with snv134 and 8x2TB drives in RAIDz2 and
currently no Zil or L2ARC. I noticed the I/O speed to NFS shares on
the testpool drops to something hardly usable while scrubbing the
pool.
How can I address this? Will adding
On 04/28/10 03:17 AM, Roy Sigurd Karlsbakk wrote:
Hi all
I have a test system with snv134 and 8x2TB drives in RAIDz2 and currently no
Zil or L2ARC. I noticed the I/O speed to NFS shares on the testpool drops to
something hardly usable while scrubbing the pool.
Is that small random or