Hartwig,
You have covered most of the tricks we know about; other, more experienced
developers may be able to provide better insight.
We had to move about 60GB of table data around, and we ended up doing
what you have done with one extra step: we batched the inserts in
groups of 10,000 between BEGIN and END to make explicit transactions
out of them. It's not clear whether you are already doing that.
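A minimal sketch of that batching, using Python's built-in sqlite3 module (the table name, schema, and row contents here are made up for illustration):

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode,
# so we control the transaction boundaries explicitly.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, payload TEXT)")

BATCH = 10_000  # commit every 10,000 inserts, as described above
cur = conn.cursor()

cur.execute("BEGIN")
for i in range(100_000):
    cur.execute("INSERT INTO items (payload) VALUES (?)", (f"row-{i}",))
    if (i + 1) % BATCH == 0:
        cur.execute("COMMIT")  # SQLite also accepts END as a synonym
        cur.execute("BEGIN")
cur.execute("COMMIT")
```

Without the explicit BEGIN/COMMIT, SQLite wraps every single INSERT in its own transaction, and each one pays the full durability cost.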
Rob
On 10 Sep 2019, at 16:02, mailing lists wrote:
I have the following situation:
- relatively small records (about 100 bytes each) are inserted into one
table
- this table contains three indices
- about 100 million or more records have to be inserted
Insertion slows down considerably after about 100,000 records have been
inserted. I suspect the slowdown is related to indexing, because:
a) removing the indices brings the speed up
b) it does not matter whether I use a solid-state drive or a
conventional one (the overall speed differs, but the phenomenon itself
does not)
c) changing the cache size has only a minor impact
So, the best solution I have found so far is to disable indexing during
insertion and to index the table afterwards (this is orders of magnitude
faster than inserting with the indexes in place). Are there any better
solutions or other tricks I might try (e.g. splitting the table into a
data part and an index part)?
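A sketch of that drop-and-rebuild approach with three indices as described above (table and index names are made up), using Python's built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (a INTEGER, b TEXT, c REAL)")
conn.execute("CREATE INDEX idx_a ON t(a)")
conn.execute("CREATE INDEX idx_b ON t(b)")
conn.execute("CREATE INDEX idx_c ON t(c)")

# Drop the indices before the bulk load...
for idx in ("idx_a", "idx_b", "idx_c"):
    conn.execute(f"DROP INDEX {idx}")

# ...load everything in one transaction...
conn.execute("BEGIN")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 ((i, f"s{i}", i * 0.5) for i in range(50_000)))
conn.execute("COMMIT")

# ...and rebuild each index afterwards in a single pass over the table.
conn.execute("CREATE INDEX idx_a ON t(a)")
conn.execute("CREATE INDEX idx_b ON t(b)")
conn.execute("CREATE INDEX idx_c ON t(c)")
```

Rebuilding after the load avoids the random B-tree updates that make per-row index maintenance degrade once the indexes no longer fit in the page cache.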
BTW: I am using journal_mode DELETE. WAL mode only delays the problem
and increases the speed a bit, but not significantly.
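For reference, the journal mode and cache size mentioned here are both per-connection PRAGMAs; a small sketch of switching them (the file path is made up, and WAL requires a file-backed database):

```python
import os
import sqlite3
import tempfile

# WAL mode does not apply to in-memory databases, so use a temp file.
path = os.path.join(tempfile.mkdtemp(), "bulk.db")
conn = sqlite3.connect(path, isolation_level=None)

# Switch the journal mode; SQLite reports the mode actually in effect.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # -> wal

# Other knobs commonly tuned for bulk loads:
conn.execute("PRAGMA synchronous=NORMAL")  # fewer fsyncs than FULL
conn.execute("PRAGMA cache_size=-64000")   # negative = size in KiB (~64 MB)
```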
Regards,
Hartwig
_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users