>The overhead per record is 20 bytes (Dave's message indicated 16 bytes ...
>I seem to remember it being 20, I'm not sure which is correct),
The overhead depends on the data itself. There is a minimum of 16 bytes of
overhead with OS 3.0 (14 with 2.0), and that holds only if your data size is a
multiple of 4 bytes. If not, round your data size up to the nearest 4-byte
multiple and then add the 16 bytes. So far, this method has produced 100%
accurate predictions for me. Bob did warn that as the heap becomes fragmented,
the record overhead may also increase.
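
In case it helps, here is a minimal C sketch of that sizing rule. The function
name is purely illustrative (it is not a Palm OS call), and it assumes the
16-byte (OS 3.0) / 14-byte (OS 2.0) figures above; keep Bob's fragmentation
caveat in mind.

#include <stdio.h>

/* Illustrative sketch of the sizing rule above (not a Palm OS API):
 * round the data size up to the next 4-byte multiple, then add the
 * fixed per-record overhead (16 bytes on OS 3.0, 14 on OS 2.0). */
static unsigned long PredictedRecordCost(unsigned long dataSize, int isOS30)
{
    unsigned long rounded  = (dataSize + 3UL) & ~3UL; /* next multiple of 4 */
    unsigned long overhead = isOS30 ? 16UL : 14UL;    /* per-record overhead */
    return rounded + overhead;
}

int main(void)
{
    /* A 10-byte record rounds to 12 bytes of data + 16 = 28 bytes on OS 3.0. */
    printf("%lu\n", PredictedRecordCost(10UL, 1));  /* prints 28 */
    printf("%lu\n", PredictedRecordCost(16UL, 1));  /* already a multiple: 32 */
    return 0;
}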
D
-----Original Message-----
From: Jason Dawes <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
Date: Wednesday, June 30, 1999 3:17 PM
Subject: RE: Database advice
>At 01:44 PM 6/30/99 +0100, Stuart Norton wrote:
>>i.e. first we'll get a*******, and
>>then b*******. All we want is the stars. I presume this can be placed in a
>>single 16-byte record. Is this an effective use of the database, though:
>>only one record?
>
>The overhead per record is 20 bytes (Dave's message indicated 16 bytes ...
>I seem to remember it being 20, I'm not sure which is correct), which
>means that storing 8 or 16 bytes per record would be very inefficient. You
>should probably think about putting a predetermined number of readings in
>each record (maybe everything for a particular minute).
>
>(30 readings per record * 16 bytes per reading + 20 bytes record overhead) *
>60 * 8 + 84 bytes database overhead = around 235K
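
For reference, here is that estimate spelled out as a small C sketch. I'm
assuming the 60 * 8 stands for 60 minutes per hour over 8 hours of logging
(one record per minute at a reading every 2 seconds); the variable names are
just illustrative.

#include <stdio.h>

/* Rough version of the estimate above: 30 readings of 16 bytes packed
 * into one record per minute, 20 bytes of overhead per record, 8 hours
 * of records, plus 84 bytes of database overhead. */
int main(void)
{
    unsigned long bytesPerReading = 16;
    unsigned long readingsPerRec  = 30;          /* one per 2 s => 30 per minute */
    unsigned long recordOverhead  = 20;
    unsigned long records         = 60UL * 8UL;  /* minutes per hour * 8 hours */
    unsigned long dbOverhead      = 84;

    unsigned long total = (readingsPerRec * bytesPerReading + recordOverhead)
                          * records + dbOverhead;

    printf("total = %lu bytes (about %lu KB)\n", total, total / 1024UL);
    /* prints: total = 240084 bytes (about 234 KB), i.e. "around 235K" */
    return 0;
}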
>
>>
>>Also, what are the time overheads for writing a database record? It must
>>be low, otherwise we will have to re-think. Your help would be appreciated.
>>
>
>Reading/writing to a database is slow compared to reading/writing from
>memory; you can compare it to reading/writing to a hard drive. Even so, it's
>very fast on a two-second time scale. (If all you are doing is taking a
>measurement every two seconds and writing it to a database, you don't need to
>worry about it.)