On Mon, Mar 22, 2010 at 12:21 PM, Richard Elling wrote:
> Yes, it is better. But still nowhere near platter speed. All it takes is
> one little seek...
True, dat. I find that scrubs start very slow (<20 MB/s) with the disks at
near-100% utilization. Towards the end of the scrub, speeds are u…
> In other words, there is no case where multiple scrubs compete for the
> resources of a single disk, because a single disk only participates in
> one pool.
Excellent point. However, the problem scenario was described as a SAN. I can
easily imagine a scenario where some SAN administrator crea…
On Mar 22, 2010, at 11:33 AM, Bill Sommerfeld wrote:
> On 03/22/10 11:02, Richard Elling wrote:
>> Scrub tends to be a random workload dominated by IOPS, not bandwidth.
> you may want to look at this again post build 128; the addition of
> metadata prefetch to scrub/resilver in that build appears to have
> dramatically changed how it performs (largely for the better).
On 22/03/2010 12:50, Edward Ned Harvey wrote:
No, it is not a subdirectory; it is a filesystem mounted on top of the
subdirectory.
So unless you use NFSv4 with mirror mounts or an automounter, other NFS
versions will show you the contents of a directory and not a filesystem.
It doesn't matter whether it is zfs or not.
> IIRC it's "zpool scrub", and last time I checked, the zpool command
> exited (with status 0) as soon as it had started the scrub. Your
> command would start _ALL_ scrubs in parallel as a result.
You're right. I did that wrong. Sorry 'bout that.
So either way, if there's a zfs property for s…
> No, it is not a subdirectory; it is a filesystem mounted on top of the
> subdirectory.
> So unless you use NFSv4 with mirror mounts or an automounter, other NFS
> versions will show you the contents of a directory and not a filesystem.
> It doesn't matter whether it is zfs or not.
Ok, I learned something.
On 22.03.2010 13:35, Edward Ned Harvey wrote:
> Does cron happen to know how many other scrubs are running, bogging
> down your IO system? If the scrub scheduling was integrated into zfs
> itself,
It doesn't need to.
Crontab entry: /root/bin/scruball.sh
/root/bin/scruball.sh:
#!/usr/bin/bash
for filesystem in filesystem1 filesystem2 filesy…
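The script is cut off above; the following is only a plausible reconstruction (the `scrub_all` helper, the pool names, and the overridable `ZPOOL` variable are all assumptions). Note that, as pointed out elsewhere in the thread, `zpool scrub` returns with status 0 as soon as the scrub has started, so a bare loop like this kicks off all scrubs in parallel:

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of the truncated scruball.sh.
# ZPOOL is overridable (e.g. ZPOOL=echo for a dry run).
ZPOOL="${ZPOOL:-zpool}"

scrub_all() {
    # 'zpool scrub' exits with status 0 as soon as the scrub has *started*,
    # so this loop starts every scrub at once rather than one at a time.
    local pool
    for pool in "$@"; do
        $ZPOOL scrub "$pool"
    done
}

# Example usage (placeholder names):
# scrub_all filesystem1 filesystem2 filesystem3
```

Also note that scrubs operate on pools, not individual filesystems, so the names passed in would have to be pool names.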
> > Actually ... Why should there be a ZFS property to share NFS, when you
> > can already do that with "share" and "dfstab"? And still the zfs
> > property exists.
Probably because it is easy to create new filesystems and clone them; as
NFS only works per filesystem, you need to edit dfsta…
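The point about clones can be illustrated concretely; the dataset and path names below are hypothetical:

```shell
# Legacy sharing: /etc/dfs/dfstab needs one line per filesystem, so every
# new filesystem or clone means another edit (paths are hypothetical):
#
#   share -F nfs -o rw /tank/home/alice
#   share -F nfs -o rw /tank/home/bob
#
# With the ZFS property, children and clones inherit the setting:
#
#   zfs set sharenfs=on tank/home
#   zfs create tank/home/carol      # shared automatically, no dfstab edit
```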
On 21.03.2010 14:26, Edward Ned Harvey wrote:
> Most software introduced in Linux clearly violates the "UNIX
> philosophy".
Hehehe, don't get me started on OSX. ;-) And for the love of all things
sacred, never say OSX is not UNIX. I made that mistake once. Which is not
to say I was proven wrong or anything - but it's apparently a subjec…
> That would add unnecessary code to the ZFS layer for something that
> cron can handle in one line.
Actually ... Why should there be a ZFS property to share NFS, when you can
already do that with "share" and "dfstab"? And still the zfs property
exists.
I think the proposed existence of a ZFS sc…
To add my 0.2 cents...
I think starting/stopping scrub belongs to cron, smf, etc., and not to
zfs itself.
However, what would be nice to have is the ability to freeze/resume a
scrub and also to limit its rate of scrubbing.
One of the reasons is that when working in SAN environments one has to
tak…
On Sat, 20 Mar 2010, Tim Cook wrote:
Funny (ironic?) you'd quote the UNIX philosophy when the Linux folks have
been running around since day one claiming the basic concept of ZFS flies
in the face of that very concept. Rather than do one thing well, it's
unifying two things (file system and r…
On 20.03.2010 23:00, Gary Gendel wrote:
I'm not sure I like this at all. Some of my pools take hours to scrub. I
have a cron job run scrubs in sequence: start one pool's scrub, then poll
until it's finished, start the next and wait, and so on, so I don't
create too much load and bring all I/O to a crawl.
The job is launched on…
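A sequential cron job of this kind can be sketched as follows; the function name, the pool names, the five-minute poll interval, and the exact "scrub in progress" status wording are assumptions, not the poster's actual script:

```shell
#!/usr/bin/env bash
# Scrub pools strictly in sequence: start one, poll until it finishes,
# then start the next, so only one scrub loads the I/O system at a time.
# ZPOOL is overridable (e.g. ZPOOL=echo for a dry run); names below are
# placeholders.
ZPOOL="${ZPOOL:-zpool}"

scrub_in_sequence() {
    local pool
    for pool in "$@"; do
        $ZPOOL scrub "$pool"
        # 'zpool status' reports "scrub in progress" while a scrub runs
        while $ZPOOL status "$pool" | grep -q 'scrub in progress'; do
            sleep 300
        done
    done
}

# Example usage (placeholder names):
# scrub_in_sequence pool1 pool2 pool3
```

Polling is needed precisely because `zpool scrub` returns as soon as the scrub is started, as noted earlier in the thread.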
On Mar 20, 2010, at 12:07 PM, Svein Skogen wrote:
We all know that data corruption may happen, even on the most reliable
of hardware. That's why zfs has pool scrubbing.
Could we introduce a zpool option (as in zpool set )
for "scrub period", in "number of hours" (with 0 being no automatic
scrubbing)?
I see several modern RAID controllers (s…
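For comparison, the cron-based alternative favored in the replies amounts to a one-line schedule; the pool name, path, and timing below are hypothetical:

```shell
# Illustrative crontab entry: scrub the pool "tank" every Sunday at 02:00
0 2 * * 0 /usr/sbin/zpool scrub tank
```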