Hello Domingo,
DAD>db.exec_dml("PRAGMA synchronous = 0;");
DAD>db.exec_dml("PRAGMA journal_mode = WAL");
DAD>//db.exec_dml("PRAGMA journal_mode = MEMORY;");
DAD>//db.exec_dml("PRAGMA journal_mode = OFF;");
DAD>//db.exec_dml("PRAGMA locking_mode = EXCLUSIVE;");
DAD>db.exec_dml("PRAGMA temp_store
Thanks for the bug report. The ticket is
https://www.sqlite.org/src/tktview/7b3328086a5c1
Your analysis of the problem seems spot on. I'm guessing you spent a
long time debugging this. Your efforts are appreciated.
On 10/2/16, Clemens Ladisch wrote:
> Hi,
>
> if the
We tend to use append-only. We delete some small transient data as we
use it; this transient data maintains a bit of state between appends.
We're talking 300-500 rows in a table.
It's not read-only; we add a lot of data every 3-5 minutes, but it's all
driven from a single Perl process that
On 2 Oct 2016, at 10:47pm, Domingo Alvarez Duarte wrote:
> How do you use those big databases?
Just a note that it's not the big databases which are a problem. Like another
poster, I too use big databases in production -- 43GB in my case. And that
particular database
Hello Rob!
How do you use those big databases?
Append-only? Read-only? Constant read/write? Foreign keys?
Could you show the schema so we can discuss it?
Cheers!
On 02/10/16 17:44, Rob Willett wrote:
We have production databases with 30-50GB and have no issues with
managing them. Mind
Hello Richard!
On 02/10/16 18:02, Richard Hipp wrote:
For your specific UPDATE, it seems like SQLite ought to be able to do
it all in one pass, and thus avoid using all that memory to store the
rowids. This looks like an opportunity to improve the query planner.
That would be nice; also I
On 10/2/16, Domingo Alvarez Duarte wrote:
> Hello!
>
> I'm still fighting to use sqlite with a 20GB database, and now I'm trying
> to update a table with 980M rows, but sqlite is eating all my memory
> (2GB) and blocking my computer (too much swapping).
>
> I'm in doubt if
You do realize that your UPDATE is spooling an update to 980 million rows
(980 million integer updates mean the update spool needs to hold
980 million 64-bit integer primary keys plus 980 million updated values,
which is probably at LEAST 4GB of temporary data)
-- even if the correlated
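[A common workaround for this is to run the UPDATE in rowid-bounded batches so
the spool stays small. A sketch in Python: the table name bolsas_familia is
taken from later in the thread, but the column valor and the SET expression
are hypothetical, since the actual UPDATE statement is not shown here.

import sqlite3

conn = sqlite3.connect("bolsa_familia3.db")
BATCH = 100_000  # rows per transaction; bounds temporary storage

max_rowid = conn.execute(
    "SELECT max(rowid) FROM bolsas_familia").fetchone()[0] or 0
for start in range(0, max_rowid, BATCH):
    # 'valor' and the SET expression are placeholders
    conn.execute(
        "UPDATE bolsas_familia SET valor = valor + 1 "
        "WHERE rowid > ? AND rowid <= ?",
        (start, start + BATCH))
    conn.commit()  # each batch spools and journals only BATCH rows
]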
Domingo Alvarez Duarte wrote:
> I'm still fighting to use sqlite with a 20GB database, and now I'm
> trying to update a table with 980M rows, but sqlite is eating all my
> memory (2GB) and blocking my computer (too much swapping).
IIRC SQLite keeps a list of (the IDs of) all rows to be updated.
Jens Alfke wrote:
> if the key is null and the table has a rowid, the node would be small, on the
> order of 10 bytes or so … ?
Typically less than 10 bytes.
Regards,
Clemens
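[Rough arithmetic on those figures: 980 million rows at 8-10 bytes per
spooled rowid comes to roughly 8-10 GB of temporary data, several times the
2 GB of RAM mentioned, which is consistent with the heavy swapping reported.]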
We have production databases with 30-50GB and have no issues with
managing them. Mind you, we have more memory, though we only have 8GB as
that's all our VM provider allows us.
Having said that, we have never seen any performance issues that weren't
due to our schemas or our bad design. Our
Hi,
if the transfer optimization has to check whether the destination table is
empty, and if the optimization is actually used, and if the destination
table uses AUTOINCREMENT, then the generated code does not update the
sqlite_sequence table.
After these statements, the row in dst should have
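[The statements themselves are truncated above. A reproduction along the
lines described might look like this sketch; the table and column names are
made up, and whether the transfer optimization fires depends on the build:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src(id INTEGER PRIMARY KEY, x);
    CREATE TABLE dst(id INTEGER PRIMARY KEY AUTOINCREMENT, x);
    INSERT INTO src VALUES(100, 'a');
    -- dst is empty and the schemas are compatible, so the transfer
    -- optimization can copy the b-tree contents directly:
    INSERT INTO dst SELECT * FROM src;
""")
# With the bug present, sqlite_sequence holds no entry for dst,
# so a later INSERT could hand out an already-used id.
print(conn.execute("SELECT * FROM sqlite_sequence").fetchall())
]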
On 2 Oct 2016, at 9:35pm, Domingo Alvarez Duarte wrote:
> I'm still fighting to use sqlite with a 20GB database, and now I'm trying to
> update a table with 980M rows, but sqlite is eating all my memory (2GB) and
> blocking my computer (too much swapping).
I assume you
Hello!
I'm still fighting to use sqlite with a 20GB database, and now I'm trying
to update a table with 980M rows, but sqlite is eating all my memory
(2GB) and blocking my computer (too much swapping).
I'm starting to doubt whether sqlite can really manage databases above 1GB.
Does anyone have any
> On Oct 2, 2016, at 6:20 AM, Clemens Ladisch wrote:
>
> Changing the function in any way (including not registering the
> function) would essentially corrupt the index.
Well, the same can be said of using a custom collation function, which has been
supported since 3.0; or
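[For comparison, registering a custom collation from Python looks like this.
A sketch; the collation itself is made up. An index built with it depends on
the function never changing, which is exactly the hazard being discussed:

import sqlite3

def nocase_cmp(a, b):
    # hypothetical collation: simple case-insensitive ordering
    a, b = a.lower(), b.lower()
    return (a > b) - (a < b)

conn = sqlite3.connect(":memory:")
conn.create_collation("nocase_py", nocase_cmp)
conn.execute("CREATE TABLE t(name TEXT)")
conn.execute("CREATE INDEX t_name ON t(name COLLATE nocase_py)")
# If nocase_cmp's ordering ever changes, this index is silently
# inconsistent, just like an index on a changed custom function.
]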
Jens Alfke wrote:
> the WHERE clause in a CREATE INDEX statement explicitly disallows
> function calls, so I can’t constrain the index to only the rows that
> contain the JSON property. Is this limitation something that might be
> lifted soon (now that functions can be identified as ‘pure’), or is
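[Concretely, the distinction at the time was as follows. A sketch assuming a
build with the json1 extension; docs and its schema are made up:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs(body TEXT)")
# An index ON a deterministic expression is accepted:
conn.execute(
    "CREATE INDEX docs_type ON docs(json_extract(body, '$.type'))")
# ...but a function call in the partial-index WHERE clause was
# rejected at the time of this thread (later versions relaxed this
# for deterministic functions):
try:
    conn.execute(
        "CREATE INDEX docs_typed ON docs(json_extract(body, '$.type')) "
        "WHERE json_extract(body, '$.type') IS NOT NULL")
except sqlite3.OperationalError as exc:
    print(exc)
]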
* Domingo Alvarez Duarte:
> After 12 hours inserting of:
>
> 934,135,285 records in bolsas_familia
>
> 22,711,259 records in favorecidos
>
> 5,570 records in municipios
>
> ...
Insertion will be faster if you create the index after populating the
tables.
> time sqlite3 bolsa_familia3.db
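[A minimal illustration of that ordering with Python's sqlite3 module, on a
toy table since the real schema is not shown in full here:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(a INTEGER, b TEXT)")
rows = ((i, "x") for i in range(1_000_000))
with conn:  # one transaction for the whole bulk load
    conn.executemany("INSERT INTO t VALUES (?, ?)", rows)
# Building the index once, after the load, sorts the data in one
# pass instead of rebalancing the index b-tree on every insert.
conn.execute("CREATE INDEX t_a ON t(a)")
]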