Hello, I tried the scripts, but createBigTable.sh is beyond the capacity of my system. Instead I used an SQL script like the one in www.mail-archive.com/sqlite-users%40mailinglists.sqlite.org/msg08044.html
My point is that the table definition wastes space, even though it apparently works on many systems: the table has both a primary key and a rowid, and they are not the same. Is there any practical use in keeping the rowid? I tested the 1E7 case WITHOUT ROWID. The database size then drops from 505M to 222M, and dropping the table is a matter of seconds (for me too now). I may do further testing with more rows; until then my feeling is that this will scale linearly and no longer show unstable timings. Below is the output of my tests.

Thanks, E Pasma

case 1, like original (sqlite version 3.12 and page size 4096 here)

.timer on
create table uuid (uuid blob, primary key (uuid)) ;
insert into uuid
  with r as (select 1 as i union all select i+1 from r where i<10000000)
  select randomblob(16) from r ;
Run Time: real 6043.491 user 332.250625 sys 671.583469
.sys du -h west1.db*
505M    west1.db
begin ;
drop table uuid ;
Run Time: real 40.378 user 2.259595 sys 5.500557
rollback ;
.quit

case 1, drop once again (completely different timing)

.sys du -h west1.db*
505M    west1.db
begin ;
delete from uuid ;
Run Time: real 241.711 user 2.246336 sys 5.981215
drop table uuid ;
Run Time: real 0.000 user 0.000567 sys 0.000230
rollback ;
.quit

case 2, without rowid

.timer on
create table uuid (uuid blob, primary key (uuid)) without rowid ;
insert into uuid
  with r as (select 1 as i union all select i+1 from r where i<10000000)
  select randomblob(16) from r ;
Run Time: real 1141.098 user 294.535994 sys 573.902807
.sys du -h west2.db*
222M    west2.db
begin ;
drop table uuid ;
Run Time: real 1.974 user 0.844361 sys 1.095968
rollback ;
begin ;
delete from uuid ;
Run Time: real 1.924 user 0.829793 sys 1.060908
drop table uuid ;
Run Time: real 0.006 user 0.000734 sys 0.002387
rollback ;
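
P.S. My understanding of where the extra space in case 1 goes (easy to verify against the two database files from the runs above, west1.db and west2.db): a rowid table with a non-INTEGER primary key keeps the blob key in an automatic index next to the table, so each 16-byte uuid is stored twice, whereas the WITHOUT ROWID table is a single b-tree keyed on uuid and has no rowid column at all. A minimal check in the sqlite3 shell:

.open west1.db
select name, type from sqlite_master ;
-- expect table uuid plus the automatic index sqlite_autoindex_uuid_1
.open west2.db
select name, type from sqlite_master ;
-- expect only table uuid, keyed directly on the primary key
select rowid from uuid limit 1 ;
-- expect an error (no such column: rowid); the rowid is really gone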