[sqlite] VACUUM and large indices: best practice?

2013-08-30 Thread uncle.f
Hello all,

I have a process that runs 24/7 and continuously inserts data into an SQLite
database:

1) Create a new database at midnight and keep it open for writing for the
next 24 hours
2) There are no deletions, and not even reads, while the database is being
written
3) I use only bulk inserts wrapped in a transaction (one transaction per
minute of data), and each transaction may contain a few thousand rows; a
rough sketch of this setup follows the list.
4) My journal mode is MEMORY.
5) Once the insertion process is done I build several indices and close
the database
6) After that the database file is moved over the network to a storage
device
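
Roughly, the daily loop looks like this (a minimal sketch only: the table,
column and file names are made up, and error handling is trimmed):

    #include <sqlite3.h>

    /* One transaction per minute of data, as described above. */
    static int write_minute(sqlite3 *db, const double *vals, int n)
    {
        sqlite3_stmt *stmt;
        int i;
        sqlite3_exec(db, "BEGIN", 0, 0, 0);
        sqlite3_prepare_v2(db, "INSERT INTO samples(val) VALUES(?)",
                           -1, &stmt, 0);
        for (i = 0; i < n; i++) {
            sqlite3_bind_double(stmt, 1, vals[i]);
            sqlite3_step(stmt);          /* expect SQLITE_DONE */
            sqlite3_reset(stmt);
        }
        sqlite3_finalize(stmt);
        return sqlite3_exec(db, "COMMIT", 0, 0, 0);
    }

    int main(void)
    {
        sqlite3 *db;
        sqlite3_open("2013-08-30.db", &db);
        sqlite3_exec(db, "PRAGMA journal_mode=MEMORY", 0, 0, 0);
        sqlite3_exec(db, "CREATE TABLE samples(val REAL)", 0, 0, 0);
        /* ... call write_minute() once per minute for 24 hours,
           then build the indices (step 5) and close ... */
        sqlite3_close(db);
        return 0;
    }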

The database will only be used again for reading and will remain unmodified
forever.

Each database is fairly large (3-5 GB), and since it will never be
modified again, I would like to take all reasonable measures to keep the
file size, fragmentation, and data access times as low as possible.

So my question is about how to ensure the most efficient data allocation
in such a scenario. I thought of several options:

1) VACUUM before creating indices, then create the indices, then move the
database off to storage
2) Create the indices, then VACUUM, then move off to storage
3) Create the indices, move to storage, then VACUUM once the file is
already on storage (using an SQLite process running locally on the
storage device)

... or any other ordering of those three steps (VACUUM, indexing, moving
to storage)
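
For instance, option 2 would boil down to something like this (a sketch
only; the index and table names are made up):

    #include <sqlite3.h>

    /* Option 2: build the indices, then VACUUM in place, then close;
       the file is copied over to the storage device afterwards. */
    static int finalize_day(sqlite3 *db)
    {
        int rc = sqlite3_exec(db,
                              "CREATE INDEX samples_val ON samples(val)",
                              0, 0, 0);
        if (rc == SQLITE_OK)
            rc = sqlite3_exec(db, "VACUUM", 0, 0, 0); /* rewrites the file */
        return rc;
    }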

Another question would be: do I even need to VACUUM at all, considering
the way my database is produced?

I would appreciate a reply from somebody who knows the SQLite internals,
rather than an "educated guess" :-)

Thank you for your time,

Andrei
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] SQLITE_CORRUPT when PRAGMA max_page_count is reached

2010-12-07 Thread uncle.f
Could anybody please help?

We are running SQLite 3.7.3 on an embedded device and using the C API to
interact with the DB. One of our goals is to ensure that the database
never grows past a certain size (which is very small for this embedded box).

We open the DB connection once and would like to keep it open for the
whole lifetime of the C application.
The following PRAGMAs are applied when the database is opened:

 page_size=1024
 max_page_count=5120
 count_changes=OFF
 journal_mode=OFF
 temp_store=MEMORY
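
In code that amounts to roughly the following (a simplified sketch; the
file path is made up and per-pragma error checking is omitted):

    #include <sqlite3.h>

    static int open_capped_db(const char *path, sqlite3 **db)
    {
        /* 5120 pages x 1024 bytes caps the file at 5 MiB. */
        static const char *setup =
            "PRAGMA page_size=1024;"
            "PRAGMA max_page_count=5120;"
            "PRAGMA count_changes=OFF;"
            "PRAGMA journal_mode=OFF;"
            "PRAGMA temp_store=MEMORY;";
        int rc = sqlite3_open(path, db);
        if (rc == SQLITE_OK)
            rc = sqlite3_exec(*db, setup, 0, 0, 0);
        return rc;
    }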

When an INSERT hits the limit we get back SQLITE_FULL, which is fine and
expected at some point. However, all subsequent SELECTs, or in fact any
other DB interactions, return SQLITE_CORRUPT. Only once we close and
re-open the same database can we SELECT, DELETE and UPDATE again without
a problem. The failing pattern is sketched below.
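
This is what triggers it (table and column names are made up):

    #include <sqlite3.h>

    static void repro(sqlite3 *db)
    {
        int rc;
        do {   /* fill the database up to max_page_count */
            rc = sqlite3_exec(db, "INSERT INTO log(msg) VALUES('x')",
                              0, 0, 0);
        } while (rc == SQLITE_OK);
        /* rc is now SQLITE_FULL, which we expect ... */
        rc = sqlite3_exec(db, "SELECT count(*) FROM log", 0, 0, 0);
        /* ... but this, and every call after it, returns SQLITE_CORRUPT
           until the connection is closed and reopened. */
    }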

Is this intended behaviour?
Are we doing something wrong?

Thanks in advance,

Andrei

