On 4 May 2010, at 1:54pm, Romica Dascalescu wrote:

> We are using sqlite to generate a schema with ~600 tables + 500 indexes + 50 
> views and some other triggers.

Apparently your developer is some kind of data-warehousing genius who can 
keep 600 tables with different columns straight in his or her head.  I tend 
to lose track trying to remember the structures of more than six.

> Our module:
> 
> - Opens a transaction
> - Creates a table
> - Inserts some rows
> - Commits the transaction
> 
> It seems this takes a lot of time to execute (doing it for all objects 
> created),

Well sure.  You have created 600 + 500 + 50 = 1150 items before you even 
insert the first row in the first table.  And every time you switch from one 
table to another, SQLite has to read and write a different set of pages, so 
you get very little locality out of its page cache.

I have trouble believing you really need 600 different tables.  Work out a data 
structure which can keep most of this data in far fewer tables.  If you can't 
do it, explain your 600 tables to us and we'll do it for you.
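
For example (this is only a guess at your data, with invented table and 
column names), if many of those 600 tables share the same columns and differ 
only in what they describe, they can usually collapse into one table with an 
extra column saying which "kind" each row belongs to:

    -- Instead of readings_sensor_001 ... readings_sensor_600:
    CREATE TABLE readings (
        sensor_id  INTEGER NOT NULL,  -- which of the old per-sensor tables this row came from
        taken_at   TEXT    NOT NULL,
        value      REAL
    );
    CREATE INDEX readings_by_sensor ON readings(sensor_id, taken_at);

    -- A query that used to hit one small table becomes:
    SELECT taken_at, value FROM readings WHERE sensor_id = 42;

One table plus one index replaces hundreds of objects, and the page cache 
gets to work on far fewer B-trees.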

> however in case we close and reopen the connection from time to time (at each 
> 1000th transaction) things are working way faster. It looks like sqlite 
> engine is doing something wrong when transactions are over used.

If you actually need to insert a huge amount of data at one time, the best 
order is probably going to be this:

Create tables
Begin a transaction
Insert data
End transaction
Create indices
Create views
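
In SQLite terms that order looks roughly like this (all table, column, index 
and view names below are invented, just to show the shape):

    -- 1. Create the tables
    CREATE TABLE parts  (id INTEGER PRIMARY KEY, name TEXT, price REAL);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, part_id INTEGER, qty INTEGER);

    -- 2. One transaction around all the bulk INSERTs
    BEGIN;
    INSERT INTO parts  (name, price)  VALUES ('widget', 1.50);
    INSERT INTO orders (part_id, qty) VALUES (1, 10);
    -- ... the rest of your data ...
    COMMIT;

    -- 3. Indexes last, so each one is built once rather than being
    --    updated on every single INSERT
    CREATE INDEX orders_by_part ON orders(part_id);

    -- Views are just saved queries, so they can safely wait until the end too
    CREATE VIEW order_totals AS
        SELECT p.name, SUM(o.qty) AS total_qty
        FROM orders o JOIN parts p ON p.id = o.part_id
        GROUP BY p.name;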

By all means, split your INSERTs into batches of 1000, each wrapped in its 
own transaction, if only because it gives you a restart position if the whole 
thing crashes.
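
In practice that just means repeating the BEGIN/COMMIT pair, something like 
this (same invented tables as the sketch above; the comments stand in for the 
bulk of each batch):

    BEGIN;
    INSERT INTO orders (part_id, qty) VALUES (1, 10);
    -- ... rows 2 to 1000 of this batch ...
    COMMIT;

    BEGIN;
    INSERT INTO orders (part_id, qty) VALUES (1, 12);
    -- ... rows 1002 to 2000 ...
    COMMIT;
    -- and so on.  If the job dies, every batch that has COMMITted is
    -- already safe on disk and you only have to redo the current one.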

> Is there a known bug regarding this?

I can't think of anyone who would consider a 600-table schema in a single 
database worth testing.

Simon.
