On 26.01.2006 at 02:54, [EMAIL PROTECTED] wrote:

deminix <[EMAIL PROTECTED]> wrote:
I was curious whether a single page of the database is limited to at most one
record, or whether multiple records can be packed into a single page?

Multiple small records can fit on one page.  Or a large record
can span multiple pages.


If it does pack records, then the purpose of the page size becomes less obvious to me. It can certainly be chosen to match the I/O size of the underlying
OS/hardware more efficiently, but could it serve another purpose?


The file is read and written a page at a time.  If you
have a large page size (32K, say) but only want to read
or write a few bytes, you still have to do I/O on the
whole 32K.  This argues for smaller pages.

On the other hand, there is a fixed memory space, disk
space, and processing time overhead associated with each
page.  The smaller the pages, the more overhead for the
same amount of data.  This argues for larger pages.

A 1K page works well on unix for most applications.  But
it is nice to have the flexibility to adjust the page size
up or down for those cases where a different page size
might give better performance.
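To experiment with this tradeoff, the page size can be set with the page_size PRAGMA. A minimal sketch using Python's sqlite3 module (note that the PRAGMA only takes effect on a new, empty database file; the path and table name here are just for illustration):

```python
import os
import sqlite3
import tempfile

# Create a fresh database file so the page_size PRAGMA can take effect.
path = os.path.join(tempfile.mkdtemp(), "test.db")
con = sqlite3.connect(path)

# Must be issued before any content is written to the database.
con.execute("PRAGMA page_size = 4096")
con.execute("CREATE TABLE t(x)")
con.commit()

# Read the page size back to confirm it was applied.
size = con.execute("PRAGMA page_size").fetchone()[0]
print(size)  # 4096
con.close()
```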

I did some benchmarks for my two-table construct as outlined in my last post. Working with different page sizes can lead to noticeable speed improvements.

16384, insert: 73.53 s
16384, select: 27.80 s

8192, insert: 68.96 s
8192, select: 16.96 s

4096, insert: 66.38 s
4096, select: 16.59 s

2048, insert: 68.46 s
2048, select: 19.66 s

1024, insert: 87.64 s
1024, select: forgot to record that one

The average blob in my tables is 3859 bytes in size. The benchmarks were run with 6394 entries each and necessarily include application time for creating and working on the data, including additional I/O. Still, it does illustrate that the page size can have a pretty visible impact on performance.
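A benchmark of this kind can be sketched roughly as follows. This is not the poster's actual setup; the schema, blob contents, and row count here are hypothetical stand-ins, with the blob length matching the average size mentioned above:

```python
import os
import sqlite3
import tempfile
import time

def bench(page_size, n=1000, blob_len=3859):
    """Time inserting and selecting n blobs at the given page size.
    Schema and data are illustrative, not the original test setup."""
    path = os.path.join(tempfile.mkdtemp(), f"bench_{page_size}.db")
    con = sqlite3.connect(path)
    # page_size must be set before the database contains any content.
    con.execute(f"PRAGMA page_size = {page_size}")
    con.execute("CREATE TABLE data(id INTEGER PRIMARY KEY, payload BLOB)")

    payload = os.urandom(blob_len)
    t0 = time.perf_counter()
    con.executemany("INSERT INTO data(payload) VALUES (?)",
                    [(payload,)] * n)
    con.commit()
    t_insert = time.perf_counter() - t0

    t0 = time.perf_counter()
    for (_blob,) in con.execute("SELECT payload FROM data"):
        pass
    t_select = time.perf_counter() - t0
    con.close()
    return t_insert, t_select

for ps in (1024, 2048, 4096, 8192, 16384):
    ti, ts = bench(ps)
    print(f"{ps}: insert {ti:.2f} s, select {ts:.2f} s")
```

Timings will of course differ from the figures above, since they depend on hardware and on the surrounding application work.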

Felix
