I'm going to admit that I've always been confused about the deep
meaning of some of the buffer state bits.  There are four buffer state
bits that describe the relationship between the data cached in a
buffer and the data stored on disk:
data stored on disk:

  BH_Uptodate
  BH_Dirty
  BH_Mapped
  BH_New

Does anybody think that every combination of these bits expresses a
unique state that can actually exist?  I didn't think so.  The
question is not whether there are *some* redundant or impossible
states, but *how many* there are.

After considering this question carefully I now think there are only
four useful states.  A buffer is much like a processor register: its
function is to hold a piece of data, a disk block, so that it can be
accessed and/or modified efficiently.  At any given time the data in a
buffer can be known to be valid, and a block on disk can be known to
exist, or we may not know either of these things.  Combining the two
sets of possibilities gives four states:

  0: undef - Neither data on disk nor in cache known to be valid
  1: known - Data on disk known to be valid
  2: dirty - Data in cache known to be valid
  3: clean - Data in cache known to match data on disk

Note that the opposite of "known to be valid" is "not known to be
valid", not "known to be invalid".

It is not correct to look at these four states as two bits that can be
manipulated independently.  These are four different states and we
must consider which operations are possible and what the state of the
buffer becomes after each operation.  When setting a new state, we
should always set both bits together.  We should not yield to the
temptation to change a single bit just because we know we can.  There
is no efficiency argument for doing this; we would just tie ourselves
to a particular encoding of the states and obfuscate the code.

In fact there are only three states that matter in terms of
maintaining data consistency - states 0, 2 and 3.  State 1 exists
purely to allow us to cache the block number in the buffer header
instead of searching the inode index structure for it every time. 
This is shown clearly in the state/transition diagram below.

Buffer operations permitted:

  load - read the buffer from disk
  save - write the buffer to disk
  replace - update the entire contents of the buffer
  modify - update some, not all of the buffer
  access - use the buffer data without changing it

Annotated state/transition network:

  undef:
    replace -> dirty
    modify or access not allowed, must load first
    load not allowed, must know block first
    save not allowed, no valid data
  
  known:
    replace -> dirty
    load -> clean
    modify or access not allowed, must load first
    save not allowed, no valid data

  dirty:
    replace -> dirty
    modify -> dirty
    save -> clean
    load not allowed, would lose valid data
    access is ok

  clean:
    replace -> dirty
    modify -> dirty
    load or save is redundant
    access is ok

There is a further state variable that interacts with buffer states:
the state of our program.  There are only a few program states that
interest us, but even so, multiplying by the number of buffer states
gives a transition network that is too large to be easily understood. 
This is where we fall back on a procedural analysis, and I won't get
into that today.

What is the use of knowing all this?  Well, for one thing, it shows
that our buffer state handling can be cleaned up and simplified quite
a lot.  Perhaps more importantly, it shows that we aren't taking
advantage of some possible optimizations.  For example, we can fill a
buffer with valid data (replace) and then allow a system write to
update part of that data without having saved the buffer first.  I
hope to elaborate on this over the next few days.  In the meantime,
anyone who is interested, please try to blow holes in my analysis. :-)

--
Daniel