On Thu, 18 Nov 2010 01:45:22 -0800, Yang <teddyyyy...@gmail.com>
wrote:

>I wonder if I can do this efficiently in sqlite:
>
>For a given row and one column (most likely blob type), I want to
>append some value to the end of the blob, many times. Later I may
>read back all the appended sections and write out the entire blob as
>a new value.
>
>
>Without any special support, I can read the existing blob, add some
>new data to it, and update the column with the new blob value.
>But because most incremental updates are small, reading the majority
>of the old blob is a waste of time.
>
>Ideally this would be a sparse table (each append creates a new
>"column"; there is no limit on how many columns each row could have,
>nor any schema for columns, for that matter). But I can't find an
>off-the-shelf solution for sparse tables. (I can't use a columnar DB,
>since my access pattern is still row-first.)

A variable number of columns is not a good design:

1) It smells like a repeating group or array, which is not correct
in a relational schema.

2) Changing a schema on the fly is a bad idea, as it invalidates all
statements compiled with sqlite3_prepare().

3) All rows would need the number of columns required by the row with
the highest number of updates; most of those values would be NULL.

Repeating groups and arrays can be avoided by adding a row for
every append action. To do so, extend the primary key of the row
with a fragment sequence number (or a date/timestamp).

CREATE TABLE Accumulatedblobs (
        identity     INTEGER,
        fragmentseq  INTEGER,
        fragment     BLOB,
        PRIMARY KEY (identity,fragmentseq)
);
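With this schema, each append becomes a cheap INSERT, and the full blob is
reassembled on read by concatenating the fragments in sequence order. A
minimal sketch using Python's sqlite3 module (the helper names are mine;
only the table and columns come from the schema above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE Accumulatedblobs (
        identity     INTEGER,
        fragmentseq  INTEGER,
        fragment     BLOB,
        PRIMARY KEY (identity, fragmentseq)
)""")

def append_fragment(con, identity, data):
    # Insert the next fragment; COALESCE handles the very first append,
    # when MAX(fragmentseq) is NULL.
    con.execute(
        "INSERT INTO Accumulatedblobs (identity, fragmentseq, fragment) "
        "SELECT ?, COALESCE(MAX(fragmentseq), 0) + 1, ? "
        "FROM Accumulatedblobs WHERE identity = ?",
        (identity, data, identity))

def read_blob(con, identity):
    # Reassemble the whole blob by concatenating fragments in order.
    rows = con.execute(
        "SELECT fragment FROM Accumulatedblobs "
        "WHERE identity = ? ORDER BY fragmentseq",
        (identity,))
    return b"".join(row[0] for row in rows)

append_fragment(con, 1, b"hello ")
append_fragment(con, 1, b"world")
print(read_blob(con, 1))  # b'hello world'
```

Note that each append only writes the new fragment; nothing of the old
blob is read or rewritten.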

If the number of rows becomes a concern, you could run a background
optimization process that periodically collects all fragment rows with
the same identity into one.
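That compaction step could be as simple as the following sketch (one
possible approach, not the only one; the function name is mine). Running
it inside a transaction means readers never observe a partially collapsed
state:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE Accumulatedblobs (
        identity     INTEGER,
        fragmentseq  INTEGER,
        fragment     BLOB,
        PRIMARY KEY (identity, fragmentseq)
)""")
con.executemany(
    "INSERT INTO Accumulatedblobs VALUES (1, ?, ?)",
    [(1, b"foo"), (2, b"bar"), (3, b"baz")])

def compact(con, identity):
    # Collapse all fragments for one identity into a single row.
    # The "with con" block wraps the read-delete-insert in a transaction.
    with con:
        rows = con.execute(
            "SELECT fragment FROM Accumulatedblobs "
            "WHERE identity = ? ORDER BY fragmentseq",
            (identity,)).fetchall()
        whole = b"".join(r[0] for r in rows)
        con.execute("DELETE FROM Accumulatedblobs WHERE identity = ?",
                    (identity,))
        con.execute(
            "INSERT INTO Accumulatedblobs VALUES (?, 1, ?)",
            (identity, whole))

compact(con, 1)
```

After compaction the identity is back to one row, and appends can resume
from fragmentseq 1.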

Interesting to read:
http://www.thethirdmanifesto.com/ 
-- 
  (  Kees Nuyt
  )
c[_]
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users