[sqlite] Insertion into the large DB drastically slows down at some point

2015-08-28 Thread Sergej Jurečko
> On modern PCs, SQLite's page cache does not have a large effect because > of the file cache of the OS, but you should do, e.g., > PRAGMA cache_size = -1000000; > for a 1 GB cache (default is only a few MB). Hitting the page cache is much cheaper as it does not involve a system call. Try
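
A minimal sketch of that setting (not taken verbatim from the original mail), assuming the documented convention that a negative cache_size value is a size in KiB:

    -- Sketch: give SQLite's own page cache roughly 1 GB.
    -- Negative N means N KiB, so -1000000 is about 1 GB (default is a few MB).
    PRAGMA cache_size = -1000000;
    -- Hits in this cache stay inside the library and avoid a system call,
    -- unlike hits that have to go through the OS file cache.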

[sqlite] Insertion into the large DB drastically slows down at some point

2015-08-28 Thread Clemens Ladisch
Yuri wrote: > I followed every piece of advice I could find: > * Database is written into memory > * Whole insertion happens in one transaction in one giant BEGIN TRANSACTION; > ... END TRANSACTION; block. > * Foreign keys are deferred: PRAGMA defer_foreign_keys=ON; > * Journal is disabled: PRAGMA
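
A sketch of the setup Yuri describes, using the corresponding SQLite PRAGMA names (the exact statements in the original mail are cut off, so the details here are an assumption):

    -- Bulk-load setup sketch for a throwaway / in-memory import database.
    PRAGMA journal_mode = OFF;          -- journal disabled
    PRAGMA defer_foreign_keys = ON;     -- FK checks postponed until COMMIT
    BEGIN TRANSACTION;
    -- ... all INSERT statements, in one giant transaction ...
    END TRANSACTION;                    -- same as COMMIT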

[sqlite] Insertion into the large DB drastically slows down at some point

2015-08-28 Thread R.Smith
On 2015-08-28 04:15 AM, Yuri wrote: > I am building a fairly large DB, with major tables having 800k..5M rows, > with several relationships between tables. > > At ~30% into the insertion process it slows down rapidly. Records > #171k..172k are still fast, and records #172k..173k are already ~10 >

[sqlite] Insertion into the large DB drastically slows down at some point

2015-08-28 Thread Simon Slavin
On 28 Aug 2015, at 3:45am, Yuri wrote: > My computer has 24GB of RAM of which 4GB is free. At the time of the problem > the size of the process is 325MB. And the size of the complete DB is ~250MB. So > this isn't it. > > What's worth mentioning though is that at the time of the event in question

[sqlite] Insertion into the large DB drastically slows down at some point

2015-08-28 Thread Simon Slavin
On 28 Aug 2015, at 3:15am, Yuri wrote: > * Database is written into memory > [...] > How can I understand why the slowdown occurs? Especially, why is the slowdown so "sharp"? Your computer has a certain amount of free memory. Once your database is bigger than that size the computer has to

[sqlite] Insertion into the large DB drastically slows down at some point

2015-08-28 Thread Yuri
Thanks to everybody who made suggestions. There was a legitimate constraint violation caused by a bug in the importing program. The program was erroneously inserting a zero integer into a field that is both the leading part of the primary key (possibly causing its non-uniqueness) and a foreign
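
A hypothetical sketch (table and column names invented, not from the thread) of how a bad zero value in a column that is both part of the primary key and a foreign key can slip through until COMMIT when foreign key checks are deferred:

    -- Hypothetical schema; the real tables are not shown in the thread.
    PRAGMA foreign_keys = ON;
    PRAGMA defer_foreign_keys = ON;
    CREATE TABLE parent(id INTEGER PRIMARY KEY);
    CREATE TABLE child(
        parent_id INTEGER REFERENCES parent(id),
        seq       INTEGER,
        PRIMARY KEY(parent_id, seq)
    );
    BEGIN;
    INSERT INTO child VALUES (0, 1);   -- importer bug: 0 instead of a real parent id
    -- no error yet: the foreign key check is deferred ...
    COMMIT;                            -- ... and fails here: FOREIGN KEY constraint failed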

[sqlite] Insertion into the large DB drastically slows down at some point

2015-08-27 Thread Richard Hipp
How many indexes on your data? Can you DROP your indexes for the insert, then CREATE INDEX them again after all the content is in place? -- D. Richard Hipp drh at sqlite.org
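
A sketch of the drop-and-recreate pattern suggested here (index and table names are placeholders, not from the thread):

    -- Placeholder names; the real schema is not shown in the thread.
    DROP INDEX IF EXISTS idx_big_table_col;
    BEGIN;
    -- ... bulk INSERTs into big_table ...
    COMMIT;
    -- Rebuild the index once, in a single pass, after the data is in place.
    CREATE INDEX idx_big_table_col ON big_table(col);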

[sqlite] Insertion into the large DB drastically slows down at some point

2015-08-27 Thread Stephen Chrzanowski
I'm with Simon. If you're writing 5 million rows to memory, the OS has to move unused memory out to disk. I'd say write the data to disk (SSD if you can) and check if the data written out is at a consistent speed. On Thu, Aug 27, 2015 at 10:29 PM, Simon Slavin wrote: > > On 28 Aug 2015, at

[sqlite] Insertion into the large DB drastically slows down at some point

2015-08-27 Thread Yuri
On 08/27/2015 19:45, Richard Hipp wrote: > How many indexes on your data? > > Can you DROP your indexes for the insert, then CREATE INDEX them again > after all the content is in place? Only the indexes needed during the insert are in the DB. Only 2 indexes exist, plus there are several "uniq"
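
Worth noting for context: each UNIQUE constraint is backed by an implicit index that must be updated on every insert, so the effective index count is higher than the two explicit ones. A minimal illustration with a hypothetical table:

    -- Hypothetical table: the UNIQUE constraint creates an automatic
    -- index (named sqlite_autoindex_t_1) that is maintained on every
    -- INSERT, just like an explicit CREATE INDEX would be.
    CREATE TABLE t(a INTEGER, b TEXT UNIQUE);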

[sqlite] Insertion into the large DB drastically slows down at some point

2015-08-27 Thread Yuri
On 08/27/2015 19:29, Simon Slavin wrote: > Your computer has a certain amount of free memory. Once your database is > bigger than that size the computer has to keep moving parts of the database > into storage so it has room for the new data. > > I will guess that if you got more RAM in your

[sqlite] Insertion into the large DB drastically slows down at some point

2015-08-27 Thread Yuri
I am building a fairly large DB, with major tables having 800k..5M rows, with several relationships between tables. At ~30% into the insertion process it slows down rapidly. Records #171k..172k are still fast, and records #172k..173k are already ~10 times slower. (all records are more or less