Chris Mason wrote:
> > So I have the priority ordering:
> >
> >   blocks(i) -> root(i) -> blocks(i+1) -> root(i+1) -> etc
> >
> > And it would be possible to compress that slightly to:
> >
> >   root(i-1) + blocks(i) -> root(i) + blocks(i+1) -> etc
> 
> Then the I/O borders would benefit you as well.  Anywhere you wait_on_buffer
> because that buffer has to hit disk before you can proceed is a performance
> hit.  It won't fix many bookkeeping problems, but it will make it easier
> to keep things flowing to disk.

Ah, I see.  I was assuming I'd sleep until the I/O completes, wake up
and immediately start the metaroot I/O, then sleep again and wake up to
do the next phase transition.  Actually, I don't see why that wouldn't
work.
 
> Ok, as long as we understand why the current method is good, we can talk
> about ways to make it better ;-)  Instead of forking a thread for each
> sync, I would rather allow the FS to start a thread on mount, and use that
> for syncing.  Either way, it is something that could be experimented with,
> to see if the cost/complexity is worth it.

Interestingly, that's exactly what I do.  It's called "tuxdemon".  It
already has to have some way to know when to quit - it watches a
variable that gets set during [the regular close-down sequence].  It
sounds promising, *but*: what about those file systems that are content
to let bdflush do their dirty work (groan) for them?  Why have a couple
dozen extra threads just for them?  I know it's not much overhead to
have a sleeping thread around, but still...

--
Daniel
-
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to [EMAIL PROTECTED]
