Thanks Richard, that makes perfect sense.

Thanks Howard, but I don't know what you are talking about, so I will
google "copy-on-write".

Григорий Григоренко, interesting! I'll consider this approach if at some
point I'm able to go "scorched earth" and start this from scratch, but at
this point I would have to change too much stuff.
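For reference, the revision-counter scheme from the quoted mail below can be sketched roughly as follows. This is a minimal illustration using Python's sqlite3 module, not the thread's C code; the `refill` helper is invented, while the `rev`/`data` tables and `current`/`revision` columns follow the quoted example:

```python
import sqlite3

# Sketch of the revision-counter scheme: writers insert a new snapshot
# under revision current+1, flip the pointer, then delete old rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rev(current INTEGER)")
conn.execute("INSERT INTO rev VALUES (0)")
conn.execute("CREATE TABLE data(value TEXT, revision INTEGER)")

def refill(rows):
    # Writer: stage the new snapshot under the next revision number.
    (nxt,) = conn.execute("SELECT current + 1 FROM rev").fetchone()
    with conn:  # one transaction per batch in the real scheme
        conn.executemany("INSERT INTO data VALUES (?, ?)",
                         [(r, nxt) for r in rows])
    with conn:  # flip the pointer: readers switch atomically
        conn.execute("UPDATE rev SET current = current + 1")
    with conn:  # drop superseded rows (done incrementally in the real scheme)
        conn.execute("DELETE FROM data WHERE revision < ?", (nxt,))

refill(["a", "b"])
refill(["c"])

# Reader: only ever sees rows belonging to the current revision.
rows = conn.execute(
    "SELECT value FROM data JOIN rev ON revision = current").fetchall()
print(rows)  # [('c',)]
```

Readers never observe a half-written snapshot because the switch is a single-row UPDATE in its own transaction.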

I will go with the WAL solution for now. I'm just worried that a buggy
process could hang while holding a prepared statement open, causing the WAL
file to grow forever and then causing errors in other processes.
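To make that concern concrete: in WAL mode the writer can checkpoint periodically, and PRAGMA wal_checkpoint(TRUNCATE) reports whether a reader blocked the checkpoint. A minimal sketch, again in Python's sqlite3 module rather than the C API, with a made-up database path:

```python
import os
import sqlite3
import tempfile

# Enable WAL and checkpoint from the writer. PRAGMA wal_checkpoint(TRUNCATE)
# returns a row (busy, log, checkpointed); busy = 1 means a reader prevented
# the checkpoint, so the WAL file could not be reset and will keep growing
# until that reader finishes or is killed.
path = os.path.join(tempfile.mkdtemp(), "cache.db")
conn = sqlite3.connect(path)
(mode,) = conn.execute("PRAGMA journal_mode=WAL").fetchone()

conn.execute("CREATE TABLE IF NOT EXISTS t(x)")
with conn:
    conn.execute("INSERT INTO t VALUES (1)")

busy, frames, checkpointed = conn.execute(
    "PRAGMA wal_checkpoint(TRUNCATE)").fetchone()
if busy:
    # A long-lived reader is pinning the WAL; surface it so the hung
    # process can be found before the file grows without bound.
    print("checkpoint blocked; WAL frames:", frames)
```

Logging the busy flag from the 30-minute refill job would flag a hung reader long before the WAL file becomes a problem.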


On Wed, Nov 28, 2012 at 3:02 PM, Григорий Григоренко <[email protected]> wrote:

> Hi,
>
>
> CREATE TABLE rev(current); INSERT INTO rev VALUES(0);
> CREATE TABLE data(..., revision);
>
> Readers:
> SELECT * FROM data JOIN rev ON revision = current WHERE ... ;
> // or read "SELECT current FROM rev" into a variable and pass its value to
> // "SELECT * FROM data WHERE revision = ?"
>
> Writer:
>
> // insert new records; the old records remain
>
> SELECT current + 1 FROM rev; // into some variable
>
> BEGIN;
> INSERT INTO data(..., revision) VALUES (..., current );
> INSERT INTO data(..., revision) VALUES (..., current );
> ...
> ... repeat 1000 times
> ...
> COMMIT;
>
> BEGIN;
> INSERT INTO data(..., revision) VALUES (..., current );
> INSERT INTO data(..., revision) VALUES (..., current );
> ...
> ... repeat 1000 times
> ...
> COMMIT;
>
> // repeat inserting until all records are inserted; the batch size of 1000
> // is a wild guess )
>
> // now switch readers to new records
> BEGIN; UPDATE rev SET current = current + 1; COMMIT; // the explicit
> BEGIN/COMMIT is not needed; it is shown only to make clear this runs in
> its own transaction
>
> // now delete old records again incrementally
>
> // repeat this block until no more old records are deleted
> BEGIN;
> SELECT Min(rowid) + 1000 FROM data WHERE revision = current - 1; // into variable MaxId
> DELETE FROM data WHERE revision = current - 1 AND rowid < MaxId;
> COMMIT;
>
> // done, only new records remain in the table; repeat the above steps to
> insert the next batch of records
>
>
>
>
> Regards,
> GG
>
>
> Wed 28 Nov 2012 09:47:50 from Alejandro Martínez <[email protected]>:
> >
> > I have one process that every 30 minutes refills several tables in this
> > manner:
> >
> > sqlite3_open_v2(CACHEDB_PATH, &sqcache_conn, SQLITE_OPEN_CREATE |
> > SQLITE_OPEN_READWRITE, NULL)
> >
> > - For each table:
> >
> > begin deferred transaction;
> > delete from [table];
> > insert into table ...
> > insert into table ...
> > insert into table ...
> > [up to 180,000 rows]
> > commit;
> >
> > Sometimes the commit fails, so it is retried. (Why would it fail? It's
> > the only writer.)
> >
> > I also have many other processes that open that SQLite database read
> > only, with sqlite3_open_v2(_dbfile, &sqcache_conn, SQLITE_OPEN_READONLY,
> > NULL) and sqlite3_busy_timeout(sqcache_conn, 5000).
> >
> > These processes create very simple prepared statements to query those
> > tables.
> >
> > The big problem I'm having is that when I step these prepared
> > statements, they lock for 5 seconds and then fail.
> >
> > I put that busy timeout in just for completeness; I wasn't expecting
> > any locking, because it is a read-only query.
> >
> > I really need these queries not to lock or fail.
> >
> > What am I doing wrong? Any suggestions?
> >
> > Thank you,
> > Alejandro
> > _______________________________________________
> > sqlite-users mailing list
> > [email protected]
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
