On 8/23/06, R. Steve McKown <[EMAIL PROTECTED]> wrote:
The two suggestions below come from some shortcomings (?) I'm finding with the
LogWrite/LogRead interfaces.  Please tell me if I'm overlooking obvious
solutions using the existing interface definitions.

See comments below :-)

#1 - Record-atomic LogRead.read()

I'm storing data objects of varying size, one per record, in a circular log
buffer.  These objects must be treated as atomic units of data.  The stm25p
implementation, even for circular logs, appears to correctly enforce
record-level atomicity for all functions except LogRead.read().  read() will
silently read a partial record if the supplied user buffer is smaller than
the record size.  This means that some operations, like placing the contents
of as many whole records as possible into a message_t's payload, require the
use of an intermediate buffer and a second memory copy for each record read.

I think a useful addition to the LogRead interface would be a record-atomic
readRecord().  It could function much like snprintf(), in that readDone() in
response to readRecord() would always return the size of the next record in
the log but only copy it into the user buffer if it is capable of holding the
entire record.  Such a function would reduce RAM usage and data copies in
many operations (like the above example) on log records containing variable
sized objects.

I make a distinction here between a record containing a variably sized object
and a record containing a variable number of fixed-sized objects, like
uint16_t's.  In the latter case the application code knows to read in
multiples of sizeof(uint16_t) so partial record reads are never an issue.
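A minimal synchronous C sketch of the proposed snprintf-style semantics (the real TinyOS interface would be split-phase, with the size reported via readDone(); the names log_t and read_record are my invention for illustration):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical record-atomic read. Always reports the full size of the
 * next record (snprintf-style), but copies it into buf and advances the
 * read position only if the whole record fits. */
typedef struct {
    const uint8_t *data;   /* backing log storage */
    uint16_t next_len;     /* length of the record at the read position */
    uint32_t pos;          /* current read offset */
} log_t;

uint16_t read_record(log_t *log, uint8_t *buf, uint16_t buf_len) {
    uint16_t rec_len = log->next_len;
    if (rec_len <= buf_len) {
        memcpy(buf, log->data + log->pos, rec_len);
        log->pos += rec_len;   /* record consumed only on success */
    }
    return rec_len;            /* always the full record size */
}
```

A caller filling a message_t payload could loop, stopping (without losing a record) as soon as the reported size exceeds the space remaining.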

The specification intentionally doesn't allow you to find record-end
boundaries, so as not to force implementations to record unneeded
boundaries (which may clearly entail additional overhead). For
instance, the at45db implementation only remembers the end of the last
record in a page, but it doesn't know where "earlier" records ended.

If you are using variable-sized records, the expectation is that you
will record whatever extra metadata you need to find their boundaries.
Yes, this might require doing 2 reads per record (e.g., if each record
is of the form <length><data>).
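The &lt;length&gt;&lt;data&gt; convention above can be sketched in plain C (stream_read stands in for LogRead.read(); a real TinyOS client would issue two split-phase reads per record, one for the header and one for the payload):

```c
#include <stdint.h>
#include <string.h>

/* Each record is stored as a 1-byte length followed by that many
 * payload bytes. stream_read simulates a raw byte-stream log read. */
typedef struct { const uint8_t *buf; uint32_t pos, len; } stream_t;

static int stream_read(stream_t *s, uint8_t *dst, uint32_t n) {
    if (s->pos + n > s->len) return -1;   /* end of log */
    memcpy(dst, s->buf + s->pos, n);
    s->pos += n;
    return 0;
}

/* Read one record; returns its payload length, or -1 at end of log
 * (or if the caller's buffer is too small). */
int read_one_record(stream_t *s, uint8_t *payload, uint8_t max) {
    uint8_t len;
    if (stream_read(s, &len, 1) != 0) return -1;      /* read 1: header */
    if (len > max) return -1;
    if (stream_read(s, payload, len) != 0) return -1; /* read 2: data */
    return len;
}
```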

To summarise: the goal was not to have a record-oriented log. Rather,
it was to have a log with reliability guarantees that allows you to
store fixed-size records, variable-size record-like things, or
byte-stream-like things. The cost is that doing any of these may take
a bit more effort than in an abstraction designed specifically for one
of them.

#2 - Buffer overrun indication

For circular logs containing valuable data, such as sensor readings, it
might be important to know when LogWrite.write() has overwritten existing
records not yet extracted via LogRead.read().  One technique would be the
addition of a recsLost() method, which returns the number of records
overwritten since the last call to recsLost().

The specification of LogRead.currentOffset() says:
 /**
  * Return a "cookie" representing the current read offset within the
  * log. This cookie can be used in a subsequent seek operation to
  * return to the same place in the log (if it hasn't been overwritten).
  *
  * @return Cookie representing current offset.
  *   <code>SEEK_BEGINNING</code> will be returned if:<ul>
  *   <li> a write in a circular log overwrote the previous read position
  *   <li> seek was passed a cookie representing a position before the
  *        current beginning of a circular log
  *   </ul>
  *   Note that <code>SEEK_BEGINNING</code> can also be returned at
  *   other times (just after erasing a log, etc).
  */

so, if you know you weren't at the beginning of the log,
currentOffset() == SEEK_BEGINNING is a reliable indication that you
lost some records. I don't think recsLost() would be a good idea, as
making it work across reboots would (probably?) be problematic. If
you really need to know for some application, it's easy enough to
include a record number in your log entries.
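The record-number approach can be sketched in C (the entry layout and the name records_lost are assumptions for illustration, not part of the TinyOS interfaces):

```c
#include <stdint.h>

/* Each log entry carries a monotonically increasing sequence number
 * assigned by the writer.  The reader compares consecutive sequence
 * numbers to count records lost to circular-log overwrite, with no
 * recsLost() support needed from the log layer. */
typedef struct {
    uint32_t seq;      /* writer-assigned sequence number */
    int16_t  reading;  /* application payload, e.g. a sensor sample */
} entry_t;

/* Records skipped between two consecutively read entries;
 * 0 when nothing was lost.  Unsigned arithmetic tolerates wraparound. */
uint32_t records_lost(uint32_t last_seq, uint32_t new_seq) {
    return new_seq - last_seq - 1;
}
```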

David Gay
_______________________________________________
Tinyos-help mailing list
[email protected]
https://mail.millennium.berkeley.edu/cgi-bin/mailman/listinfo/tinyos-help
