Re: [zfs-discuss] Sidebar to ZFS Availability discussion

2008-09-02 Thread Bill Sommerfeld
On Sun, 2008-08-31 at 12:00 -0700, Richard Elling wrote:
> 2. The algorithm *must* be computationally efficient.  We are
>    looking down the tunnel at I/O systems that can deliver on the
>    order of 5 million IOPS.  We really won't have many (any?) spare
>    cycles to play with.
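
One way to meet that budget is to keep the bookkeeping to constant time
per I/O.  A minimal sketch of such a tracker, assuming an exponentially
weighted moving average over observed service times (the structure and
names here are hypothetical, not the actual ZFS code):

    #include <stdint.h>

    /*
     * Hypothetical per-device latency tracker: an exponentially
     * weighted moving average maintained with one subtract, one
     * divide-by-power-of-two, and one add per completed I/O -- a
     * handful of cycles even at millions of IOPS.
     */
    typedef struct dev_lat {
            uint64_t dl_ewma_ns;    /* smoothed service time, ns */
    } dev_lat_t;

    #define DL_SHIFT        4       /* smoothing factor alpha = 1/16 */

    static inline void
    dev_lat_update(dev_lat_t *dl, uint64_t sample_ns)
    {
            /* ewma += (sample - ewma) * alpha, all in integer math */
            int64_t delta = (int64_t)(sample_ns - dl->dl_ewma_ns);
            dl->dl_ewma_ns += delta / (1 << DL_SHIFT);
    }

No floating point and no per-sample history buffer, so the update can
sit on the completion path of every I/O without eating the spare cycles
the thread is worried about.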

Re: [zfs-discuss] Sidebar to ZFS Availability discussion

2008-09-02 Thread Bill Sommerfeld
On Sun, 2008-08-31 at 15:03 -0400, Miles Nordin wrote:
> It's sort of like network QoS, but not quite, because:
> (a) you don't know exactly how big the ``pipe'' is, only
>     approximately,

In an IP network, end nodes generally know no more than the pipe size
of the first hop -- and in some cases (such as true CSMA networks like
classical ethernet or wireless) only have an upper bound on the pipe
size.

Re: [zfs-discuss] Sidebar to ZFS Availability discussion

2008-09-02 Thread Miles Nordin
> bs == Bill Sommerfeld [EMAIL PROTECTED] writes:

bs> In an IP network, end nodes generally know no more than the
bs> pipe size of the first hop -- and in some cases (such as true
bs> CSMA networks like classical ethernet or wireless) only have
bs> an upper bound on the pipe size.

Re: [zfs-discuss] Sidebar to ZFS Availability discussion

2008-09-01 Thread Robert Milkowski
Hello Miles,

Sunday, August 31, 2008, 8:03:45 PM, you wrote:

> dc == David Collier-Brown [EMAIL PROTECTED] writes:

MN> dc> one discovers latency growing without bound on disk
MN> dc> saturation,

MN> yeah, ZFS needs the same thing just for scrub.

MN> I guess if the disks don't let you tag

Re: [zfs-discuss] Sidebar to ZFS Availability discussion

2008-09-01 Thread David Collier-Brown
Richard Elling wrote:
> [what usually concerns me is that the software people spec'ing
> device drivers don't seem to have much training in control systems,
> which is what is being designed]

Or try to develop safety-critical systems based on best effort instead
of first developing a clear and

[zfs-discuss] Sidebar to ZFS Availability discussion

2008-08-31 Thread David Collier-Brown
Re "Availability: ZFS needs to handle disk removal / driver failure
better":

A better option would be to not use this to perform FMA diagnosis, but
instead work it into the mirror child selection code.  This has already
been alluded to before, but it would be cool to keep track of latency
over
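
For illustration, a minimal sketch of latency-aware child selection
under that idea (simplified and hypothetical -- not the actual
vdev_mirror.c logic): read from the healthy child with the lowest
smoothed service time, so a disk whose latency is growing without
bound simply stops being picked, no FMA diagnosis required.

    #include <stdint.h>

    /* Hypothetical per-child state; ZFS would keep this on the vdev. */
    typedef struct mirror_child {
            uint64_t        mc_ewma_ns;     /* smoothed read latency, ns */
            int             mc_healthy;     /* nonzero if child is usable */
    } mirror_child_t;

    /*
     * Pick the mirror child to read from: the healthy child with the
     * lowest tracked latency.  A slow or dying disk drifts to the
     * bottom of the ranking and quietly stops receiving reads.
     */
    static int
    mirror_child_select(const mirror_child_t *mc, int nchildren)
    {
            int best = -1;

            for (int c = 0; c < nchildren; c++) {
                    if (!mc[c].mc_healthy)
                            continue;
                    if (best == -1 || mc[c].mc_ewma_ns < mc[best].mc_ewma_ns)
                            best = c;
            }
            return (best);          /* -1 if no readable child */
    }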

Re: [zfs-discuss] Sidebar to ZFS Availability discussion

2008-08-31 Thread Miles Nordin
> dc == David Collier-Brown [EMAIL PROTECTED] writes:

dc> one discovers latency growing without bound on disk
dc> saturation,

yeah, ZFS needs the same thing just for scrub.  I guess if the disks
don't let you tag commands with priorities, then you have to run them
at slightly below max
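
The host-side throttle implied here might look like the following
sketch (all structures and thresholds hypothetical): since the disk
itself won't honor priorities, admit a scrub I/O only while the device
queue is shallow and observed latency is comfortably below saturation.

    #include <stdint.h>

    /* Hypothetical device state; not actual ZFS structures. */
    typedef struct dev_state {
            int             ds_outstanding;     /* I/Os in flight */
            uint64_t        ds_ewma_ns;         /* smoothed latency, ns */
    } dev_state_t;

    #define SCRUB_MAX_PENDING       2               /* keep queue shallow */
    #define SCRUB_LAT_CEILING_NS    20000000ULL     /* back off above 20 ms */

    /*
     * Admission check run before issuing each scrub I/O: if the disk
     * is already busy, or latency is climbing toward saturation,
     * defer the scrub so foreground I/O keeps the device to itself.
     */
    static int
    scrub_io_admit(const dev_state_t *ds)
    {
            if (ds->ds_outstanding >= SCRUB_MAX_PENDING)
                    return (0);
            if (ds->ds_ewma_ns >= SCRUB_LAT_CEILING_NS)
                    return (0);
            return (1);
    }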

Re: [zfs-discuss] Sidebar to ZFS Availability discussion

2008-08-31 Thread Richard Elling
Miles Nordin wrote:
> dc == David Collier-Brown [EMAIL PROTECTED] writes:
>
> dc> one discovers latency growing without bound on disk
> dc> saturation,
>
> yeah, ZFS needs the same thing just for scrub.

ZFS already schedules scrubs at a low priority.  However, once the
iops
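
As a rough picture of what "low priority" buys you (a sketch, not the
actual ZIO scheduler; names hypothetical): the host dequeues scrub I/O
only when no foreground I/O is waiting, but that ordering ends the
moment the request is handed to the disk's own queue.

    typedef struct io {
            struct io       *io_next;
    } io_t;

    typedef struct io_queue {
            io_t    *q_user;        /* foreground reads and writes */
            io_t    *q_scrub;       /* background scrub/resilver */
    } io_queue_t;

    /*
     * Strict-priority dequeue: scrub I/O is issued only when no user
     * I/O is waiting.  The host enforces this ordering; the disk's
     * internal queue does not, which is where the priority stops
     * helping.
     */
    static io_t *
    io_dequeue(io_queue_t *q)
    {
            io_t *io;

            if ((io = q->q_user) != NULL)
                    q->q_user = io->io_next;
            else if ((io = q->q_scrub) != NULL)
                    q->q_scrub = io->io_next;
            return (io);
    }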