Alexandros Kostopoulos wrote:
> I would like to use sqlite as a FIFO buffer.

In SQLite, each table is stored as a B-tree, indexed by the ROWID (or
by the INTEGER PRIMARY KEY if you have declared one; that column is
just an alias for the ROWID).

When you remove the oldest entry, you get a hole in the first page of
the table.  When you add a new entry, it gets added to the last page of
the table.

Empty pages are reused when SQLite needs to allocate a new page.

(B-tree pages can be split or joined when SQLite thinks that they are
too full or too empty, so getting empty pages, or allocating new ones,
might not happen exactly when you think it should.)
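
For illustration, here is a minimal sketch of that insert-at-the-end /
delete-the-oldest pattern, using Python's sqlite3 module (the table
name, column names, and file name are made up for the example):

  import sqlite3

  conn = sqlite3.connect("fifo.db")          # made-up file name
  conn.execute("CREATE TABLE IF NOT EXISTS fifo"
               "(id INTEGER PRIMARY KEY, payload BLOB)")

  def push(data):
      # New row gets the next larger ROWID -> last page of the B-tree.
      conn.execute("INSERT INTO fifo(payload) VALUES (?)", (data,))
      conn.commit()

  def pop():
      # Oldest row has the smallest ROWID -> leaves a hole in the
      # first page; the page is reused once it becomes empty.
      row = conn.execute("SELECT id, payload FROM fifo"
                         " ORDER BY id LIMIT 1").fetchone()
      if row is None:
          return None
      conn.execute("DELETE FROM fifo WHERE id = ?", (row[0],))
      conn.commit()
      return row[1]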

> - How would sqlite scale as a FIFO buffer, for a DB size in the order
> of a few GBs, for both read and write operations?

No worse than any other DB organization with a similar amount of
additions and deletions.  A FIFO is certainly better than a DB where
random records are deleted, because in the latter case you'd have
unused space scattered all over the table, taking up I/O bandwidth.

> can I bound the maximum db file size by making sure that I delete as
> much data as needed before inserting new data?

B-tree reorganizations might leave you with several partially filled
pages, but there is a lower limit on the amount of data in one page,
so as long as you cap the number of rows you keep, there is an upper
bound on the DB file size.
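
As a rough sketch of that delete-before-insert idea (continuing the
Python example from above; the MAX_ROWS cap is an application-level
assumption, not something SQLite enforces for you):

  MAX_ROWS = 100000  # assumed cap on queue length, pick your own

  def push_bounded(conn, data):
      # Delete enough of the oldest rows that at most MAX_ROWS remain
      # after the insert; one transaction covers both statements.
      with conn:
          conn.execute(
              "DELETE FROM fifo WHERE id IN"
              " (SELECT id FROM fifo ORDER BY id"
              "  LIMIT max(0, (SELECT count(*) FROM fifo) - ? + 1))",
              (MAX_ROWS,))
          conn.execute("INSERT INTO fifo(payload) VALUES (?)", (data,))

The pages freed by the DELETE stay in the file and are reused for
later inserts (as described above), so the file stops growing once the
row count reaches its cap, apart from the partially filled pages
mentioned above.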


Regards,
Clemens