> -----Original Message-----
> From: Stephen C. Tweedie [mailto:[EMAIL PROTECTED]]
> Hi,
>
> On Tue, 12 Oct 1999 15:03:25 +0400, Hans Reiser <[EMAIL PROTECTED]> said:
>
> >> With journaling, however, we have a new problem.  We can have large
> amounts of dirty data pinned in memory, but we cannot actually write
> >> that data to disk without first allocating more memory.
>
> > Trivia: I don't think this is a feature of journaling, but rather a
> > feature of a particular implementation of journaling.  Chris will
> > correct me if I err, but Chris's journaling doesn't have this
> > property.
>
> From what I can see it does, in two places.  (Ext3 has similar
> properties in both places.)  Likewise, Chris will correct me if I'm
> wrong. :)
>
> In Reiserfs's journal_end(), a commit results in getblk() being called
> to produce one new journal block for every existing block in the entire
> transaction, and the transaction blocks are then copied to the journal.
> There is a strict ordering involved, and even if we do the copies and
> writes one block at a time, that one-block allocation is still
> required before any subsequent blocks in the journal can be freed.
> (Reiserfs currently defaults to up to 128 such allocations being
> required before anything from the transaction can be freed.)
>
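(In code, the path being described is roughly the sketch below; the
names for the transaction fields are made up, and the buffer-cache
calls are the 2.2-era ones as I remember them, not the actual reiserfs
source.)

        /*
         * Rough shape of the commit copy described above.  "trans",
         * "t_buffers", and "jstart" are illustrative, not real fields.
         */
        static void copy_transaction_to_log(kdev_t dev, struct trans *trans,
                                            int jstart, int bsize)
        {
                int i;
                for (i = 0; i < trans->t_nr_blocks; i++) {
                        struct buffer_head *tb = trans->t_buffers[i];
                        /* One fresh allocation per logged block: this
                         * getblk() can need memory before anything in
                         * the transaction can be freed. */
                        struct buffer_head *jb =
                                getblk(dev, jstart + i, bsize);
                        memcpy(jb->b_data, tb->b_data, bsize);
                        mark_buffer_uptodate(jb, 1);
                        mark_buffer_dirty(jb, 0);
                        brelse(jb);
                }
        }
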
All true.  But shouldn't I be able to write a function to reuse a
buffer_head for a different block without freeing it?  I realize the
buffer cache doesn't have a call to do that now, but it seems like it
should be possible.
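
Something along these lines, maybe.  Completely untested, and it uses
remove_from_queues()/insert_into_queues(), which are static to
fs/buffer.c, so it would have to live there:

        /*
         * Hypothetical: give an existing buffer_head a new (dev, block)
         * identity without freeing it.  Assumes the caller holds the
         * only reference and the buffer is clean and unlocked.
         */
        struct buffer_head *reuse_buffer_head(struct buffer_head *bh,
                                              kdev_t dev, int block)
        {
                if (bh->b_count != 1 || buffer_dirty(bh) ||
                    buffer_locked(bh))
                        return NULL;    /* fall back to getblk() */

                remove_from_queues(bh); /* unhash old (dev, block) */
                bh->b_dev = dev;
                bh->b_blocknr = block;
                clear_bit(BH_Uptodate, &bh->b_state);
                insert_into_queues(bh); /* rehash under new identity */
                return bh;
        }

journal_end() could then recycle the buffers of an already committed
transaction as the log blocks for the next one, instead of asking
getblk() for fresh memory at exactly the moment memory is tight.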

-chris
