* Domingo Alvarez Duarte:
> After 12 hours of inserting:
>
> 934,135,285 records in bolsas_familia
>
> 22,711,259 records in favorecidos
>
> 5,570 records in municipios
>
> ...
Insertion will be faster if you create the index after populating the
tables.
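The "index after load" advice above can be sketched with Python's built-in sqlite3 module (the table, column, and index names here are illustrative, not the poster's actual schema):

```python
# Sketch: bulk-load inside one transaction first, build the index afterwards.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE favorecidos (id INTEGER PRIMARY KEY, nome TEXT)")

# One big transaction; no secondary index exists yet, so each INSERT
# only touches the table's B-tree.
with db:
    db.executemany(
        "INSERT INTO favorecidos (nome) VALUES (?)",
        ((f"name-{i}",) for i in range(100_000)),
    )

# Create the index once the data is in place: a single sorted build
# instead of millions of incremental B-tree insertions.
db.execute("CREATE INDEX idx_favorecidos_nome ON favorecidos (nome)")
```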
> time sqlite3 bolsa_familia3.db
VACUUM removes empty pages by rewriting the database from the ground up.
https://www.sqlite.org/lang_vacuum.html
"The VACUUM command works by copying the contents of the database into a
temporary database file and then overwriting the original with the contents
of the temporary file. When overwriting
Domingo Alvarez Duarte wrote:
Hello Simon !
I already did it without using "wal" and the result was the same.
And even to my surprise, in one try I stopped in the middle, performed an
"analyze", and performance deteriorated so much that I needed to
delete the stats tables to get the
What was the size of the original database?
To VACUUM a database, the process is:
1) Read the logical components of the database, write them to a new file. This
will generate at least 1x reads (original size), and 1x writes (final size).
In most cases the reads will be larger and the
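The copy-and-rewrite process described above can be watched on a toy database using `PRAGMA page_count` (sizes here are illustrative only):

```python
# Sketch: delete data to leave free pages behind, then VACUUM and
# observe the page count drop as the database is rewritten.
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "demo.db")
db = sqlite3.connect(path)
db.execute("CREATE TABLE t (x BLOB)")
with db:
    db.executemany("INSERT INTO t VALUES (?)",
                   ((b"x" * 4096,) for _ in range(1000)))

db.execute("DELETE FROM t")   # pages become free, file does not shrink
db.commit()
before = db.execute("PRAGMA page_count").fetchone()[0]

db.execute("VACUUM")          # copies live content to a temp file, overwrites original
after = db.execute("PRAGMA page_count").fetchone()[0]
print(before, after)          # page count drops sharply after VACUUM
```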
Hello Bob !
I'm using the default sqlite page size, but I also tried a 32KB
page size and got a slightly smaller overall database size but no visible
performance gain in terms of time and I/O.
Also, memory usage skyrocketed, forcing memory swapping.
The OS was OS X Yosemite, I also
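For reference, a page-size change only takes effect if it is issued before the database receives its first content (or is followed by a VACUUM). A minimal sketch with Python's sqlite3 module:

```python
# Sketch: PRAGMA page_size must be set on a still-empty database file
# to take effect; here 32768 mirrors the 32KB experiment above.
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "pages.db")
db = sqlite3.connect(path)
db.execute("PRAGMA page_size = 32768")  # must precede the first write
db.execute("CREATE TABLE t (x)")        # first write fixes the page size
db.commit()

size = db.execute("PRAGMA page_size").fetchone()[0]
print(size)
```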
On Sat, 1 Oct 2016, Domingo Alvarez Duarte wrote:
Hello !
I'm using sqlite (trunk) for a database (see below) and for a final database
file of 22GB a "vacuum" was executed and doing so it made a lot of I/O (
134GB reads and 117GB writes in 2h:30min).
What means are you using to evaluate
//print("cache_size", db.exec_get_one("PRAGMA cache_size;"));
On 01/10/16 19:21, Keith Medcalf wrote:
Did you change the cache size? The default is rather small for a database of
22 GB.
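A sketch of checking and raising the page cache with Python's built-in sqlite3 module; the -1000000 figure is only an example (a negative value is a size in KiB, so roughly 1 GB here):

```python
# Sketch: the default cache_size (-2000, i.e. ~2 MiB) is small for a
# 22 GB database; raising it keeps far more pages in memory.
import sqlite3

db = sqlite3.connect(":memory:")
default = db.execute("PRAGMA cache_size").fetchone()[0]

db.execute("PRAGMA cache_size = -1000000")  # negative = KiB, so ~1 GB
raised = db.execute("PRAGMA cache_size").fetchone()[0]
print(default, raised)
```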
> Subject: Re: [sqlite] Why so much I/O ? Can sqlite be improved ?
>
> Hello Simon !
>
> Thanks for reply !
>
> I already know your suggestions and if you look at the database schema
> and the program used to insert data you can see that there is no
>
Hello Simon !
I already did it without using "wal" and the result was the same.
And even to my surprise, in one try I stopped in the middle, performed an
"analyze", and performance deteriorated so much that I
needed to delete the stats tables to get the better performance without
On 1 Oct 2016, at 10:18pm, Domingo Alvarez Duarte wrote:
> About the vacuum I also understand the need to rewrite the whole database but
> I'm not sure if it's really necessary to do almost 5 times the database size
> in both reads and writes (also an equivalent amount of
Hello Simon !
Thanks for reply !
I already know your suggestions and if you look at the database schema
and the program used to insert data you can see that there is no
unnecessary indices active and all inside transactions.
About the vacuum I also understand the need to rewrite the whole
On 1 Oct 2016, at 9:27pm, Domingo Alvarez Duarte wrote:
> I'm using sqlite (trunk) for a database (see below) and for a final database
> file of 22GB a "vacuum" was executed and doing so it made a lot of I/O (
> 134GB reads and 117GB writes in 2h:30min).
>
> Can
Hello !
I'm using sqlite (trunk) for a database (see below) and for a final
database file of 22GB a "vacuum" was executed, and in doing so it made a
lot of I/O (134GB reads and 117GB writes in 2h:30min).
Can something be improved in sqlite to achieve better performance?
The data is public